<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[The Luminary Edition]]></title><description><![CDATA[One Proton Block Producer's Story]]></description><link>https://blog.luminaryvisn.com/</link><image><url>https://blog.luminaryvisn.com/favicon.png</url><title>The Luminary Edition</title><link>https://blog.luminaryvisn.com/</link></image><generator>Ghost 4.38</generator><lastBuildDate>Tue, 07 Apr 2026 19:37:45 GMT</lastBuildDate><atom:link href="https://blog.luminaryvisn.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Linking an Ethereum Key to WebAuth Wallet]]></title><description><![CDATA[<p><strong>Authorizing Transactions With Metamask</strong></p><p>One of the ways that you can manage your Proton chain&apos;s wallet is to link it to an Ethereum account that you control. This can also act as a fail-safe lifeline in the event that you cannot access your Proton private key but do</p>]]></description><link>https://blog.luminaryvisn.com/linking-an-ethereum-key-to-webauth-wallet/</link><guid isPermaLink="false">69621a144f03603d5bf5f8be</guid><dc:creator><![CDATA[Chev Young]]></dc:creator><pubDate>Sat, 10 Jan 2026 09:57:02 GMT</pubDate><media:content url="https://blog.luminaryvisn.com/content/images/2026/01/ChatGPT-Image-Jan-10--2026--04_56_43-AM.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.luminaryvisn.com/content/images/2026/01/ChatGPT-Image-Jan-10--2026--04_56_43-AM.png" alt="Linking an Ethereum Key to WebAuth Wallet"><p><strong>Authorizing Transactions With Metamask</strong></p><p>One of the ways that you can manage your Proton chain&apos;s wallet is to link it to an Ethereum account that you control. 
This can also act as a fail-safe lifeline in the event that you cannot access your Proton private key but do have access to your Ethereum wallet. That is because just like <a href="https://webauth.com/start">WebAuth</a> can use your mobile device to authorize transactions on your desktop computer (or another device), you can also simply sign transactions with your Ethereum wallet. This is a brief tutorial on how to link an ETH account via MetaMask Wallet to an XPR account via WebAuth Wallet.</p><p><strong>Linking an Ethereum Key</strong></p><p>This guide assumes you have WebAuth installed on your mobile device. Navigate in your browser to <a href="https://webauth.com/login">webauth.com/login</a> and click &quot;login with mobile&quot;.</p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2026/01/image.png" class="kg-image" alt="Linking an Ethereum Key to WebAuth Wallet" loading="lazy" width="1096" height="998" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2026/01/image.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2026/01/image.png 1000w, https://blog.luminaryvisn.com/content/images/2026/01/image.png 1096w" sizes="(min-width: 720px) 720px"></figure><p>Click mobile, open your phone&apos;s WebAuth wallet, and scan the QR code that pops up; you will be logged in.</p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2026/01/image-1.png" class="kg-image" alt="Linking an Ethereum Key to WebAuth Wallet" loading="lazy" width="1096" height="998" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2026/01/image-1.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2026/01/image-1.png 1000w, https://blog.luminaryvisn.com/content/images/2026/01/image-1.png 1096w" sizes="(min-width: 720px) 720px"></figure><p>Once logged in, click the little key-shaped icon. 
I already have an Ethereum key added, as well as a couple of phones.</p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2026/01/image-3.png" class="kg-image" alt="Linking an Ethereum Key to WebAuth Wallet" loading="lazy" width="1096" height="998" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2026/01/image-3.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2026/01/image-3.png 1000w, https://blog.luminaryvisn.com/content/images/2026/01/image-3.png 1096w" sizes="(min-width: 720px) 720px"></figure><p>Next, click &quot;Add new device&quot; and select &quot;Ethereum&quot;. You will see MetaMask open, which will allow you to select an account to link.</p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2026/01/image-4.png" class="kg-image" alt="Linking an Ethereum Key to WebAuth Wallet" loading="lazy" width="1251" height="1046" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2026/01/image-4.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2026/01/image-4.png 1000w, https://blog.luminaryvisn.com/content/images/2026/01/image-4.png 1251w" sizes="(min-width: 720px) 720px"></figure><p>After clicking &quot;confirm&quot;, your key will be linked to your WebAuth wallet, allowing you to authorize transactions using your MetaMask wallet.</p><p><strong>Conclusion</strong></p><p>Having an Ethereum key linked allows you to authorize transactions all across the XPR/Proton and Metal ecosystems using MetaMask. It&apos;s very useful when trading on MetalX, for example, because I don&apos;t have to open my phone every time I want to sign a transaction.</p><p>That&apos;s all for today. 
Thanks for reading<em> The Luminary Edition</em>.</p>]]></content:encoded></item><item><title><![CDATA[Proton Hyperion Setup In 2025: Part II]]></title><description><![CDATA[<p><strong>Overview</strong></p><p>In the <a href="https://blog.luminaryvisn.com/proton-hyperion-setup-2025/">previous</a> post, I explained how to configure <code>nodeos</code> for a state history node on the Proton chain. This post explains how to complete the rest of the process, and assumes you have a proton node that is fully synchronized with the entire state history already indexed to</p>]]></description><link>https://blog.luminaryvisn.com/proton-hyperion-setup-in-2025-part-ii/</link><guid isPermaLink="false">6901ab5f4f03603d5bf5f380</guid><dc:creator><![CDATA[Chev Young]]></dc:creator><pubDate>Sat, 01 Nov 2025 15:43:40 GMT</pubDate><media:content url="https://blog.luminaryvisn.com/content/images/2025/11/part2.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.luminaryvisn.com/content/images/2025/11/part2.png" alt="Proton Hyperion Setup In 2025: Part II"><p><strong>Overview</strong></p><p>In the <a href="https://blog.luminaryvisn.com/proton-hyperion-setup-2025/">previous</a> post, I explained how to configure <code>nodeos</code> for a state history node on the Proton chain. This post explains how to complete the rest of the process, and assumes you have a proton node that is fully synchronized with the entire state history already indexed to the head block. </p><p><strong>The Process</strong></p><ul><li>Installation of the remaining requirements</li><li>Configuration of Hyperion for Proton Chain, your brand&apos;s URL, logo, contact data, alerts, etc</li><li>Run <code>./run proton-indexer</code> in ABI scan mode (took about 10 hours or so in my case)</li><li>Modify the <code>config/chains/proton-chain.json</code> file after ABI scan completes</li><li>Run the index scan. This must be done in batches. 
How many depends on when you&apos;re reading this and what kind of hardware you&apos;ve got. In my case, it&apos;s looking like it&apos;ll be something like 17-24 batches, each taking a few hours to complete.</li><li>When this is completed, launch <code>./run proton-api</code> and place it behind a reverse proxy server such as Nginx for public access under a subdomain of your entity&apos;s domain.</li></ul><p><strong>Installing Requirements</strong></p><p>This series is intended to offer some guidance, because the official Hyperion documentation is a little daunting and the PDF file found in the xpr.start repository is somewhat outdated. However, so far, with a little luck and God willing, it&apos;s fair to say that between those two documents and the blogs from EOSphere, the information is out there. Don&apos;t be afraid to ask for help, but most if not all of the information that you need can be found in the Hyperion docs, the EOSphere blogs, and the Proton PDF guide.</p><p><a href="https://hyperion.docs.eosrio.io/4.0/providers/get-started/">Hyperion documentation</a> | <a href="https://github.com/XPRNetwork/xpr.start/blob/master/Proton%20Chain%20-%20Hyperion%20Guide.pdf">Proton&apos;s Hyperion Guide</a> | <a href="https://medium.com/eosphere/subpage/wax-technical-how-to">EOSphere Blog</a></p><p>If you&apos;re stumbling across this post and have not yet set up your state history node, see <a href="https://blog.luminaryvisn.com/proton-hyperion-setup-2025/">part 1</a> of this series for detailed instructions on setting that up.</p><p>After your nodeos instance is synchronized with a full state history block log, you need to install a number of programs, including:</p><!--kg-card-begin: markdown--><ul>
<li>Elasticsearch 8.7+ - Required for historical data indexing.</li>
<li>Kibana 8.7+ - Optional, for visualizing Elasticsearch data.</li>
<li>RabbitMQ (v 3.12+) - Required for message queuing between indexer stages.</li>
<li>Redis - Required for caching and inter-process communication.</li>
<li>MongoDB Community Server - Conditionally Required. Needed only if enabling state tracking features (features.tables.* or features.contract_state.enabled in chain config). See MongoDB section below for details.</li>
<li>Node.js v22 - Required runtime environment.</li>
<li>PM2 - Required process manager for Node.js applications.</li>
<li>NODEOS (spring 1.1.2 or leap 5.0.3 recommended) - Required Antelope node with State History Plugin (SHIP) enabled.</li>
</ul>
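
Several of these components are version-pinned, and the automated installer can drift from the pins (it pulled Node 24 for me when 22 was required, as described below), so a tiny guard in your own setup scripts can catch a mismatch early. A sketch; `require_major` is a made-up helper name:

```shell
# require_major VERSION MAJOR: succeed when VERSION's major component
# matches MAJOR (hypothetical helper; VERSION as printed by `node --version`)
require_major() {
  v=${1#v}                 # strip a leading "v", e.g. v22.11.0 -> 22.11.0
  [ "${v%%.*}" = "$2" ]    # compare only the major component
}

# Example guard for a setup script:
# require_major "$(node --version)" 22 || { echo "need Node.js 22" >&2; exit 1; }
```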
<!--kg-card-end: markdown--><p>Although it&apos;s recommended to manually install everything, I&apos;ve found that the automated installation shell script actually works fine, although you will need to manually install the correct version of Node.js, which as of the time I am writing this, according to the Hyperion documentation, is 22. The installation script installed version 24.</p><p>It also installed node locally, rather than globally, and while that may have worked, I was trying to follow the <a href="https://github.com/XPRNetwork/xpr.start/blob/master/Proton%20Chain%20-%20Hyperion%20Guide.pdf">PDF</a> in the xpr.start repository, which stated that node should be installed globally with sudo. In any case, after I ran the installation script, I installed node version 22 globally and removed the PATH updates pointing to the local version 24 of Node.js from my eos user&apos;s <code>.bashrc</code>. After doing that, I ran the following commands as the eos user (referenced from the xpr.start PDF):</p><!--kg-card-begin: markdown--><pre>
sudo npm install pm2@latest -g
sudo pm2 startup
</pre><!--kg-card-end: markdown--><p>Next, I read through the manual installation process found in the official Hyperion documentation, linked above, ensuring that the installer script didn&apos;t miss anything. I would suggest doing the same, as there are some important settings, particularly for Elasticsearch, that need to be changed. Other than Elasticsearch, the installer pretty much took care of everything. However, for security reasons, you will want to change the passwords for mongo and rabbitmq.</p><p>Take care to note the credentials for mongo and rabbitmq, as well as the password found in the file named <code>elastic.pass</code> in your eos user&apos;s <code>~/.hyperion-installer</code> directory, as you&apos;ll need them later.</p><p>At this point, I cloned the Hyperion repository and began to figure out how to configure it. </p><!--kg-card-begin: markdown--><pre>
git clone https://github.com/eosrio/hyperion-history-api.git
cd hyperion-history-api
npm ci
</pre><!--kg-card-end: markdown--><p>When I ran <code>npm ci</code>, I got an error about a missing library. The solution ended up being to add a third-party repository and upgrade the library in question (<code>libstdc++6</code>). This will likely be an issue if you are on Ubuntu 22 as recommended, so if <code>npm ci</code> throws an error, just go ahead and run these commands:</p><!--kg-card-begin: markdown--><pre>
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install --only-upgrade libstdc++6
</pre><!--kg-card-end: markdown--><p>Now, running <code>npm ci</code> will work.</p><p><strong>Configuring Hyperion</strong></p><p>Next, you can install the program and create the <code>connections.json</code> file automatically with the <code>hyp-config</code> tool found in the repository that you just cloned.</p><p>Install the program:</p><p><code>sudo npm install</code></p><p>Create the connections config file:</p><p><code>./hyp-config connections init</code></p><p>You will be prompted for the connection information for each component; other than the passwords that you hopefully changed (including the elastic system password), you can accept the default options. Apparently the <code>hyp-config</code> tool did not yet exist when the Proton PDF file was written, but so far it seems you can safely use it to generate that file. If I recall correctly, no fields needed to be updated in the <code>connections.json</code> file.</p><p>Run <code>./hyp-config connections test</code> to make sure that everything works. In my case, everything did. I skipped over the section of the PDF that says &quot;Edit ecosystem.config.js file located in the hyperion-history-api directory&quot;, because I didn&apos;t find this anywhere in the current Hyperion documentation, and went on to configuring the chain, as per the Hyperion documentation.</p><!--kg-card-begin: markdown--><pre>
http=http://127.0.0.1:8888
ship=ws://127.0.0.1:8080
./hyp-config chains new proton --http $http --ship $ship
</pre>
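
Before going further, it can't hurt to confirm that the HTTP endpoint you just pointed Hyperion at actually answers, and to note the current head block, which you'll want later when sizing the batched index runs. A sketch; `head_block` is my own helper name:

```shell
# head_block: extract head_block_num from a /v1/chain/get_info JSON
# response read on stdin (relies only on grep)
head_block() {
  grep -o '"head_block_num"[^0-9]*[0-9]*' | grep -o '[0-9]*$'
}

# Against the live node configured above:
# curl -s http://127.0.0.1:8888/v1/chain/get_info | head_block
```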
<!--kg-card-end: markdown--><p>That command should produce <code>config/chains/proton.config.json</code>, which you will need to edit. I changed all of the fields that were different from the configuration present in the PDF file, and I suppose it is safe to post that configuration here. I added comments to the fields that must be changed after the first index run; they&apos;re marked with a preceding <code>_comment</code> key:</p><p><em><strong>Warning:</strong> </em>The Proton PDF has a mistake in it that caused me a lot of banging my head against the wall:<em> ds_pool_size cannot be 1 if you have ds_queues 2</em>. Set <code>scaling.ds_pool_size</code> to 2.</p><!--kg-card-begin: markdown--><pre>{
  &quot;api&quot;: {
    &quot;enabled&quot;: true,
    &quot;pm2_scaling&quot;: 1,
    &quot;chain_name&quot;: &quot;proton&quot;,
    &quot;server_addr&quot;: &quot;0.0.0.0&quot;,
    &quot;server_port&quot;: 7000,
    &quot;stream_port&quot;: 1234,
    &quot;stream_scroll_limit&quot;: -1,
    &quot;stream_scroll_batch&quot;: 500,
    &quot;server_name&quot;: &quot;hyperion.luminaryvisn.com&quot;,
    &quot;provider_name&quot;: &quot;Luminary Visions, LLC&quot;,
    &quot;provider_url&quot;: &quot;https://www.luminaryvisn.com&quot;,
    &quot;chain_api&quot;: &quot;http://127.0.0.1:8888&quot;,
    &quot;push_api&quot;: &quot;https://api.luminaryvisn.com&quot;,
    &quot;chain_logo_url&quot;: &quot;https://i.postimg.cc/0y5zcgwL/proton-xpr-logo.png&quot;,
    &quot;explorer&quot;: {
      &quot;home_redirect&quot;: false,
      &quot;upstream&quot;: &quot;&quot;,
      &quot;theme&quot;: &quot;&quot;
    },
    &quot;_comment_enable_caching&quot;: &quot;set to true after bulk indexing&quot;,
    &quot;enable_caching&quot;: false,
    &quot;cache_life&quot;: 1,
    &quot;limits&quot;: {
      &quot;get_actions&quot;: 1000,
      &quot;get_voters&quot;: 100,
      &quot;get_links&quot;: 1000,
      &quot;get_deltas&quot;: 1000,
      &quot;get_trx_actions&quot;: 200
    },
    &quot;access_log&quot;: false,
    &quot;chain_api_error_log&quot;: false,
    &quot;log_errors&quot;: false,
    &quot;custom_core_token&quot;: &quot;XPR&quot;,
    &quot;enable_export_action&quot;: false,
    &quot;disable_rate_limit&quot;: false,
    &quot;rate_limit_rpm&quot;: 1000,
    &quot;rate_limit_allow&quot;: [],
    &quot;_comment_disable_tx_cache&quot;: &quot;set to false after bulk indexing&quot;,
    &quot;disable_tx_cache&quot;: true,
    &quot;tx_cache_expiration_sec&quot;: 3600,
    &quot;v1_chain_cache&quot;: [
      {
        &quot;path&quot;: &quot;get_block&quot;,
        &quot;ttl&quot;: 3000
      },
      {
        &quot;path&quot;: &quot;get_info&quot;,
        &quot;ttl&quot;: 500
      }
    ],
    &quot;node_max_old_space_size&quot;: 1024,
    &quot;node_trace_deprecation&quot;: false,
    &quot;node_trace_warnings&quot;: false
  },
  &quot;indexer&quot;: {
    &quot;enabled&quot;: true,
    &quot;start_on&quot;: 1,
    &quot;stop_on&quot;: 0,
    &quot;rewrite&quot;: false,
    &quot;purge_queues&quot;: true,
    &quot;_comment_live_reader&quot;: &quot;set to true after bulk indexing&quot;,
    &quot;live_reader&quot;: false,
    &quot;live_only_mode&quot;: false,
    &quot;_comment_abi_scan_mode&quot;: &quot;IMPORTANT set to false after initial ABI scan&quot;,
    &quot;abi_scan_mode&quot;: true,
    &quot;fetch_block&quot;: true,
    &quot;fetch_traces&quot;: true,
    &quot;fetch_deltas&quot;: true,
    &quot;disable_reading&quot;: false,
    &quot;disable_indexing&quot;: false,
    &quot;process_deltas&quot;: true,
    &quot;node_max_old_space_size&quot;: 4096,
    &quot;node_trace_deprecation&quot;: false,
    &quot;node_trace_warnings&quot;: false
  },
  &quot;settings&quot;: {
    &quot;preview&quot;: false,
    &quot;chain&quot;: &quot;proton&quot;,
    &quot;eosio_alias&quot;: &quot;eosio&quot;,
    &quot;parser&quot;: &quot;3.2&quot;,
    &quot;auto_stop&quot;: 0,
    &quot;index_version&quot;: &quot;v1&quot;,
    &quot;debug&quot;: false,
    &quot;bp_logs&quot;: false,
    &quot;bp_monitoring&quot;: false,
    &quot;ipc_debug_rate&quot;: 60000,
    &quot;allow_custom_abi&quot;: false,
    &quot;rate_monitoring&quot;: true,
    &quot;max_ws_payload_mb&quot;: 256,
    &quot;ds_profiling&quot;: false,
    &quot;auto_mode_switch&quot;: false,
    &quot;use_global_agent&quot;: false,
    &quot;index_partition_size&quot;: 10000000,
    &quot;max_retained_blocks&quot;: 0,
    &quot;es_replicas&quot;: 0
  },
  &quot;blacklists&quot;: {
    &quot;actions&quot;: [],
    &quot;deltas&quot;: []
  },
  &quot;whitelists&quot;: {
    &quot;actions&quot;: [],
    &quot;deltas&quot;: [],
    &quot;max_depth&quot;: 10,
    &quot;root_only&quot;: false
  },
  &quot;scaling&quot;: {
    &quot;readers&quot;: 2,
    &quot;ds_queues&quot;: 2,
    &quot;ds_threads&quot;: 1,
    &quot;ds_pool_size&quot;: 2,
    &quot;indexing_queues&quot;: 1,
    &quot;ad_idx_queues&quot;: 1,
    &quot;dyn_idx_queues&quot;: 1,
    &quot;max_autoscale&quot;: 4,
    &quot;batch_size&quot;: 10000,
    &quot;resume_trigger&quot;: 5000,
    &quot;auto_scale_trigger&quot;: 20000,
    &quot;block_queue_limit&quot;: 10000,
    &quot;max_queue_limit&quot;: 50000,
    &quot;routing_mode&quot;: &quot;heatmap&quot;,
    &quot;polling_interval&quot;: 10000
  },
  &quot;features&quot;: {
    &quot;streaming&quot;: {
      &quot;_comment_enable&quot;: &quot;set to true after abi scan&quot;,
      &quot;enable&quot;: true,
      &quot;_comment_trace&quot;: &quot;set to true after abi scan&quot;,
      &quot;traces&quot;: true,
      &quot;deltas&quot;: false
    },
    &quot;tables&quot;: {
      &quot;proposals&quot;: true,
      &quot;accounts&quot;: true,
      &quot;voters&quot;: true,
      &quot;permissions&quot;: true,
      &quot;user_resources&quot;: false
    },
    &quot;contract_state&quot;: {
      &quot;contracts&quot;: {}
    },
    &quot;index_deltas&quot;: true,
    &quot;index_transfer_memo&quot;: true,
    &quot;index_all_deltas&quot;: true,
    &quot;deferred_trx&quot;: false,
    &quot;failed_trx&quot;: false,
    &quot;resource_limits&quot;: false,
    &quot;resource_usage&quot;: false
  },
  &quot;prefetch&quot;: {
    &quot;read&quot;: 50,
    &quot;block&quot;: 100,
    &quot;index&quot;: 500
  },
  &quot;hub&quot;: {
    &quot;enabled&quot;: false,
    &quot;instance_key&quot;: &quot;&quot;,
    &quot;custom_indexer_controller&quot;: &quot;&quot;
  },
  &quot;plugins&quot;: {},
  &quot;alerts&quot;: {
    &quot;triggers&quot;: {
      &quot;onApiStart&quot;: {
        &quot;enabled&quot;: true,
        &quot;cooldown&quot;: 30,
        &quot;emitOn&quot;: [
          &quot;http&quot;
        ]
      },
      &quot;onIndexerError&quot;: {
        &quot;enabled&quot;: true,
        &quot;cooldown&quot;: 30,
        &quot;emitOn&quot;: [
          &quot;telegram&quot;,
          &quot;email&quot;,
          &quot;http&quot;
        ]
      }
    },
    &quot;providers&quot;: {
      &quot;telegram&quot;: {
        &quot;enabled&quot;: false,
        &quot;botToken&quot;: &quot;&quot;,
        &quot;destinationIds&quot;: [
          1
        ]
      },
      &quot;http&quot;: {
        &quot;enabled&quot;: false,
        &quot;server&quot;: &quot;http://localhost:6200&quot;,
        &quot;path&quot;: &quot;/notification&quot;,
        &quot;useAuth&quot;: false,
        &quot;user&quot;: &quot;&quot;,
        &quot;pass&quot;: &quot;&quot;
      },
      &quot;email&quot;: {
        &quot;enabled&quot;: false,
        &quot;sourceEmail&quot;: &quot;sender@example.com&quot;,
        &quot;destinationEmails&quot;: [
          &quot;receiverA@example.com&quot;,
          &quot;receiverB@example.com&quot;
        ],
        &quot;smtp&quot;: &quot;smtp-relay.gmail.com (UPDATE THIS)&quot;,
        &quot;port&quot;: 465,
        &quot;tls&quot;: true,
        &quot;user&quot;: &quot;&quot;,
        &quot;pass&quot;: &quot;&quot;
      }
    }
  }
}</pre><!--kg-card-end: markdown--><p>Of course, you&apos;ll want to change some of those fields to match your own entity&apos;s upstream API server address, name, logo, etc. In the JSON file above, take note of the commented fields, which must be changed after the first ABI synchronization phase completes. To be clear, the example above is configured for the ABI sync process, which must happen before the indexing phase; the fields below the lines starting with <code>_comment</code> are the boolean variables which need to be inverted after the ABI scan is completed.</p><p>Before starting the ABI scan, you ought to test that the chain configuration works correctly:</p><!--kg-card-begin: markdown--><pre>
./hyp-config chains list
./hyp-config chains test proton
</pre><!--kg-card-end: markdown--><p><strong>Creating Another ZFS Partition For Elasticsearch&apos;s Data</strong></p><p>When my node was about 60% synced, I noticed that my main file system was going to run out of space. </p><p></p><!--kg-card-begin: markdown--><pre>
eos@hyperion:~/hyperion-history-api$ df                                                                  
Filesystem                1K-blocks      Used  Available Use% Mounted on                                                                                                                                           
tmpfs                       6543464      1652    6541812   1% /run                                                                                                                                                 
/dev/md3                  919652204 508491096  364371684  59% /                                          
tmpfs                      32717304         0   32717304   0% /dev/shm                                   
tmpfs                          5120         0       5120   0% /run/lock                                  
/dev/md2                    1011148    137932     804508  15% /boot                                      
/dev/nvme1n1p1               522984      5120     517864   1% /boot/efi                                                                                                                                            
tmpfs                       6543460         4    6543456   1% /run/user/1000                                                                                                                                       
datavolume               2781099904       128 2781099776   1% /data/hyperion                                                                                                                                       
datavolume/blocks        2911697536 130597760 2781099776   5% /data/hyperion/blocks                      
datavolume/state-history 3494311552 713211776 2781099776  21% /data/hyperion/state-history   
</pre><!--kg-card-end: markdown--><p>Depending on your hardware setup, now may be a good time to do what I just did and move your Elasticsearch data from <code>/var/lib/elasticsearch</code> to a bigger disk. Mine was on my server&apos;s core file system, which is only 1 terabyte; with each batch of blocks I indexed, its available space kept declining while my <code>/data</code> disk, which is about 4 terabytes, stayed the same size, so I created another ZFS data volume for it. Out of all of the components required to run this gigantic system, Elasticsearch is by far the most resource-hungry.</p><p>If you are running a dedicated server that has one huge disk, you don&apos;t need to worry about this. However, if you set up a server with one smaller and one larger RAID array like I did, you may want to tell Elasticsearch to store its database on the larger disk. I ended up doing this when I was about 60% through the batched block index process, so I had to create another volume and then use rsync to copy all the existing data to that volume. If you haven&apos;t started indexing yet, skip that part*.</p><!--kg-card-begin: markdown--><pre>
sudo zfs create datavolume/elasticsearch
sudo zfs set atime=off datavolume/elasticsearch
sudo zfs set recordsize=16K datavolume/elasticsearch
sudo zfs set compression=lz4 datavolume/elasticsearch
sudo zfs set primarycache=metadata datavolume/elasticsearch
sudo zfs set logbias=throughput datavolume/elasticsearch
sudo chown -R elasticsearch:elasticsearch /data/hyperion/elasticsearch
sudo chmod 750 /data/hyperion/elasticsearch
# Make sure elastic is NOT running
sudo systemctl stop elasticsearch
# Double check
sudo systemctl status elasticsearch
# Copy all existing indices, logs, and metadata if required *
sudo rsync -aHAX --info=progress2 /var/lib/elasticsearch/ /data/hyperion/elasticsearch/
</pre><!--kg-card-end: markdown--><p>Edit <code>/etc/elasticsearch/elasticsearch.yml</code> and change <code>path.data</code> from <code>/var/lib/elasticsearch</code> to <code>/data/hyperion/elasticsearch</code>.</p><!--kg-card-begin: markdown--><pre>path.data: /data/hyperion/elasticsearch</pre>
<!--kg-card-end: markdown--><p>Now start Elasticsearch and make sure that everything works correctly. </p><p></p><!--kg-card-begin: markdown--><pre>
sudo systemctl start elasticsearch
sudo journalctl -u elasticsearch -f
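# Once Elasticsearch is back up (Ctrl+C the log follow above first), a
# scripted health check can confirm the cluster is usable on the new data
# path. es_green is a hypothetical helper that just greps a /_cluster/health
# response on stdin for a healthy status:
es_green() {
  grep -Eq '"status" *: *"(green|yellow)"'
}
# Example (assumes ES 8's default TLS with a self-signed cert, hence -k,
# and the elastic password saved by the installer):
# curl -sk -u "elastic:$(cat ~/.hyperion-installer/elastic.pass)" \
#   https://localhost:9200/_cluster/health | es_green && echo "cluster OK"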
</pre><!--kg-card-end: markdown--><p>I haven&apos;t deleted the contents of <code>/var/lib/elasticsearch</code> just yet, but I did just start another batch and everything seems to be working fine so far, so at some point I will end up <code>rm -rf</code>&apos;ing that directory to get back all that precious disk space. (As of this moment, after the ABI scan, that&apos;s about 375 gigabytes, with my database indexed up to block 200000000 against a current head block of 348559941. I am currently running a batch from 200000000 to 250000000, which is a potentially dicey range. By the way, I also disabled the <code>live_reader</code> boolean parameter in the <code>proton-chain.json</code> config file, because I realized that I may as well wait until I am synced to the head block before indexing newly produced blocks, in order to conserve system resources. Don&apos;t worry about this yet; first, you need to do the ABI scan.)</p><p><strong>The ABI Indexing Process</strong></p><p>Assuming that no errors are thrown, you should be good to begin the ABI indexing process.</p><!--kg-card-begin: markdown--><pre>
./run.sh proton-indexer
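# While this runs, you can prepare for the next phase: the _comment-flagged
# booleans in the config above must be inverted once the ABI scan finishes.
# flip_flag is a hypothetical helper; the sed pattern assumes GNU sed and the
# pretty-printed one-key-per-line config layout shown earlier.
flip_flag() {   # usage: flip_flag FILE KEY VALUE
  sed -i "s/\"$2\": [a-z]*/\"$2\": $3/" "$1"
}
# e.g., after the scan: flip_flag config/chains/proton-chain.json abi_scan_mode false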
</pre><!--kg-card-end: markdown--><p>Currently, I am waiting for that process to finish. I will continue updating this after it does.</p><p><strong>12 Hours Later</strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.luminaryvisn.com/content/images/2025/10/Screenshot-from-2025-10-29-13-30-26.png" class="kg-image" alt="Proton Hyperion Setup In 2025: Part II" loading="lazy" width="1899" height="940" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2025/10/Screenshot-from-2025-10-29-13-30-26.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2025/10/Screenshot-from-2025-10-29-13-30-26.png 1000w, https://blog.luminaryvisn.com/content/images/size/w1600/2025/10/Screenshot-from-2025-10-29-13-30-26.png 1600w, https://blog.luminaryvisn.com/content/images/2025/10/Screenshot-from-2025-10-29-13-30-26.png 1899w" sizes="(min-width: 720px) 720px"><figcaption></figcaption></figure><p><strong>The Long Process of Batched Indexing</strong></p><p>Now, according to the folks in the block producer Telegram channel, I must change those JSON fields that I noted in the config I pasted above to their opposite boolean values (perhaps except <code>live_reader</code>) and run this again, and again ... and again. The guys in the chat encouraged me to do this next run in batches, by adjusting the start and end blocks in ranges like 0-10000000, 10000000-20000000, etc.</p><p>I was told that this part would take a <em>long</em> time. 
Anyway, here&apos;s what that looks like: </p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2025/10/image-2.png" class="kg-image" alt="Proton Hyperion Setup In 2025: Part II" loading="lazy" width="1903" height="953" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2025/10/image-2.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2025/10/image-2.png 1000w, https://blog.luminaryvisn.com/content/images/size/w1600/2025/10/image-2.png 1600w, https://blog.luminaryvisn.com/content/images/2025/10/image-2.png 1903w" sizes="(min-width: 720px) 720px"></figure><p><br>This seems to be going fast, so maybe I&apos;ll increase that block interval, or better yet, write a shell script to handle this. After only 10 minutes or so:</p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2025/10/image-3.png" class="kg-image" alt="Proton Hyperion Setup In 2025: Part II" loading="lazy" width="1903" height="953" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2025/10/image-3.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2025/10/image-3.png 1000w, https://blog.luminaryvisn.com/content/images/size/w1600/2025/10/image-3.png 1600w, https://blog.luminaryvisn.com/content/images/2025/10/image-3.png 1903w" sizes="(min-width: 720px) 720px"></figure><p>So, you&apos;ll need to run this, increment the <code>start_on</code> and <code>stop_on</code> fields in <code>config/chains/proton-chain.json</code> with a range that suits the requirements of your system, and then rinse and repeat until you get to the head block. 
You can find the head block by querying your nodeos endpoint:</p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2025/10/image-7.png" class="kg-image" alt="Proton Hyperion Setup In 2025: Part II" loading="lazy" width="759" height="193" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2025/10/image-7.png 600w, https://blog.luminaryvisn.com/content/images/2025/10/image-7.png 759w" sizes="(min-width: 720px) 720px"></figure><p>By the way, I believe that it is best to not enable <code>live_reader</code> until after you have all of the historical blocks indexed. Note that you may see messages like &quot;No blocks are being processed, please check your state history node&quot; when the <code>live_reader</code> is disabled:</p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2025/11/image-4.png" class="kg-image" alt="Proton Hyperion Setup In 2025: Part II" loading="lazy" width="1848" height="945" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2025/11/image-4.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2025/11/image-4.png 1000w, https://blog.luminaryvisn.com/content/images/size/w1600/2025/11/image-4.png 1600w, https://blog.luminaryvisn.com/content/images/2025/11/image-4.png 1848w" sizes="(min-width: 720px) 720px"></figure><p>You will know when you see the &quot;parallel workers finished the requested range&quot; messages that a batch has completed, as seen in the image below. 
<br></p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2025/10/image-4.png" class="kg-image" alt="Proton Hyperion Setup In 2025: Part II" loading="lazy" width="1723" height="397" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2025/10/image-4.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2025/10/image-4.png 1000w, https://blog.luminaryvisn.com/content/images/size/w1600/2025/10/image-4.png 1600w, https://blog.luminaryvisn.com/content/images/2025/10/image-4.png 1723w" sizes="(min-width: 720px) 720px"></figure><p>It&apos;s never a bad idea to check RabbitMQ before stopping the indexer. When it&apos;s indexing to Elasticsearch, you&apos;ll see plenty of activity such as this:</p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2025/10/image-5.png" class="kg-image" alt="Proton Hyperion Setup In 2025: Part II" loading="lazy" width="1628" height="715" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2025/10/image-5.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2025/10/image-5.png 1000w, https://blog.luminaryvisn.com/content/images/size/w1600/2025/10/image-5.png 1600w, https://blog.luminaryvisn.com/content/images/2025/10/image-5.png 1628w" sizes="(min-width: 720px) 720px"></figure><p>After seeing the message logs indicating the workers have stopped, check RabbitMQ; you should see much less going on:</p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2025/10/image-9.png" class="kg-image" alt="Proton Hyperion Setup In 2025: Part II" loading="lazy" width="1637" height="458" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2025/10/image-9.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2025/10/image-9.png 1000w, https://blog.luminaryvisn.com/content/images/size/w1600/2025/10/image-9.png 1600w, 
https://blog.luminaryvisn.com/content/images/2025/10/image-9.png 1637w" sizes="(min-width: 720px) 720px"></figure><p>For the sake of completeness, I&apos;ll add this screenshot, which is what a finished batch looks like once the configured range has completed, without new blocks being indexed:</p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2025/11/image-5.png" class="kg-image" alt="Proton Hyperion Setup In 2025: Part II" loading="lazy" width="1912" height="915" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2025/11/image-5.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2025/11/image-5.png 1000w, https://blog.luminaryvisn.com/content/images/size/w1600/2025/11/image-5.png 1600w, https://blog.luminaryvisn.com/content/images/2025/11/image-5.png 1912w" sizes="(min-width: 720px) 720px"></figure><p>When you get to this point, gracefully stop the program with this command, <code>./stop proton-indexer</code>.</p><p>Wait for the <code>stop</code> script to complete and double-check the RabbitMQ interface. Note the last indexed block in the output: if you shut down prematurely, that block + 1 (rather than your intended <code>end_block</code> + 1) will be your next <code>start_block</code>. Ctrl+C the output, edit your <code>config/chains/proton-chain.json</code> file to set the next target block range, and then run the indexer again. 
Once the indexer is fully shut down, you won&apos;t see any activity in RabbitMQ.</p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2025/10/image-10.png" class="kg-image" alt="Proton Hyperion Setup In 2025: Part II" loading="lazy" width="1637" height="458" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2025/10/image-10.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2025/10/image-10.png 1000w, https://blog.luminaryvisn.com/content/images/size/w1600/2025/10/image-10.png 1600w, https://blog.luminaryvisn.com/content/images/2025/10/image-10.png 1637w" sizes="(min-width: 720px) 720px"></figure><!--kg-card-begin: markdown--><pre>
./run.sh proton-indexer
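</pre><!--kg-card-end: markdown--><p>Since each pass is just &quot;bump the range and run again&quot;, the config edit itself can be scripted. Below is a hypothetical sketch: the file layout and the <code>start_block</code>/<code>end_block</code> field names are assumptions taken from this post, and the demo operates on a scratch copy so you can see the effect before pointing it at your real <code>chains/proton-chain.json</code>.</p><!--kg-card-begin: markdown-->

```shell
# Sketch: slide the indexing window forward by BATCH blocks using jq.
# CFG is a scratch file here; point it at chains/proton-chain.json once
# you have verified the field names match your config.
CFG=$(mktemp)
echo '{"start_block":1,"end_block":10000000}' > "$CFG"
BATCH=10000000

END=$(jq -r '.end_block' "$CFG")
jq --argjson s "$((END + 1))" --argjson e "$((END + BATCH))" \
   '.start_block = $s | .end_block = $e' "$CFG" > "$CFG.tmp" && mv "$CFG.tmp" "$CFG"

jq -c . "$CFG"   # -> {"start_block":10000001,"end_block":20000000}
```

<!--kg-card-end: markdown--><p>Wrap that in a loop with the run/stop scripts and a check against the head block, and most of the babysitting disappears.</p><!--kg-card-begin: markdown--><pre>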
</pre><!--kg-card-end: markdown--><p>In the second batch, I set my <code>start_block</code> to 10000000 and <code>end_block</code> to <code>30000000</code>, effectively doubling the batch size because my system seems to be able to handle it. While indexing a range of 20000000 blocks, my system&apos;s CPU usage is nearly maxed, the process takes a little longer, and resource consumption climbs a little higher with each iteration, due to the ever-expanding nature of the blockchain. </p><p>Run through this process. As of right now, if you batch in increments of 20000000, you&apos;ll have to do this ~17 times. I am about 50% done with the process as of this moment. The system resources become ever more demanding as you go. </p><p>After this process is completed, you need to start the API with <code>./run proton-api</code>. Finally, configure an nginx reverse proxy with TLS enabled, proxying HTTPS traffic to 127.0.0.1:7000. Configure your DNS like you&apos;re setting up any other web service. Our indexer will be reachable at <a href="https://hyperion.luminaryvisn.com">https://hyperion.luminaryvisn.com</a> as soon as I finish indexing my node, which ought to be within the next few days.</p><p><strong>Some Final Thoughts</strong></p><p>Hyperion isn&apos;t as daunting to set up as it seems. It is a complex process, but once you&apos;ve done it, you&apos;ll see that it&apos;s really not so bad. I got through it with the documentation and blog posts linked in this post, as well as the very friendly community in the block producers Telegram channel. The EOSphere blogs, which I somehow missed until a little later in the process, are very useful and more comprehensive than this one; this post is specific to the Proton Network.</p><p>Because this system is so resource intensive and complex, it&apos;s a good idea to run this behind a content distribution network so that you have a WAF in front of it. 
The internet is still like the wild west, so I would recommend at least using Cloudflare &#x2013; you don&apos;t necessarily even need to pay for that, and when configured properly it can save you a lot of hassle. In fact, this is arguably one of the best use cases for a CDN, as all data on a blockchain is intended to be public, and CDNs, when used for services that require privacy, may pose a risk.</p><p>After I am done indexing, there will probably be a couple of other things that I will need to do before exposing the Hyperion API to the public. If you made it this far, I am confident that you can set up a reverse proxy, but I will write a third post to conclude this series, explaining how to do all of that.</p><p>Additionally, I am certain that there will be maintenance required from time to time, and I am currently assuming that I did everything correctly thus far. If that turns out not to be the case, I&apos;ll update this post or add that to my final post of this series, depending on whichever makes more sense. </p><p>Until then, thank you as always for reading <em>The Luminary Edition</em>, and remember to Vote Luminary Vision! After all, we are now about to be one of a minority of block producers that are providing Hyperion state history resources, which the network currently needs more of, as existing services are reportedly strained at the moment. Luminary Vision is investing in the network, because we&apos;re passionate about The Proton Chain. </p><p>PS &#x2013; if you are considering becoming a block producer on Proton, Telos, EOS, or any other chain that uses the EOS stack, you can always hire me to do so. Feel free to reach out to either me or my client on Telegram or by email. You can find us lurking in the Proton community <a href="https://t.me/XPRNetwork">Telegram channels</a>. 
</p>]]></content:encoded></item><item><title><![CDATA[Proton Hyperion Setup in 2025 Part 1]]></title><description><![CDATA[<p><strong>Scant Documentation</strong></p><p>The <a href="https://github.com/XPRNetwork/xpr.start/blob/master/Proton%20Chain%20-%20Hyperion%20Guide.pdf">documentation</a> that exists explaining how to set up a Hyperion state history node is incredibly outdated, with the last revision being in 2020. I figured that it would be helpful to document the process of setting this up. Surely, that will save someone some serious hassle. </p><p>The</p>]]></description><link>https://blog.luminaryvisn.com/proton-hyperion-setup-2025/</link><guid isPermaLink="false">68f9fe894f03603d5bf5f13d</guid><dc:creator><![CDATA[Chev Young]]></dc:creator><pubDate>Wed, 29 Oct 2025 01:43:28 GMT</pubDate><media:content url="https://blog.luminaryvisn.com/content/images/2025/11/blog_logo_1a-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.luminaryvisn.com/content/images/2025/11/blog_logo_1a-1.png" alt="Proton Hyperion Setup in 2025 Part 1"><p><strong>Scant Documentation</strong></p><p>The <a href="https://github.com/XPRNetwork/xpr.start/blob/master/Proton%20Chain%20-%20Hyperion%20Guide.pdf">documentation</a> that exists explaining how to set up a Hyperion state history node is incredibly outdated, with the last revision being in 2020. I figured that it would be helpful to document the process of setting this up. Surely, that will save someone some serious hassle. </p><p>The intended audience of this article is a Linux administrator with basic knowledge of nodeos, aka LEAP (or SPRING in the case of native EOS). Because the only people reading this will surely already know their way around a POSIX compliant shell, I will skip over providing copy and paste code for some of the really easy stuff like extracting a tarball. If you can&apos;t do that or don&apos;t know what a tarball is, you are on the wrong website. 
However, if you, like myself, wanted to contribute to the Proton Chain ecosystem by running a Hyperion state history node, congratulations, thank you, welcome to the XPR community (<em>never</em> to be confused with X<em>RP</em> &#x2013; blasphemy!!!), and I do hope that this tutorial will save you as much time as it would have saved me.</p><p><strong>System Requirements</strong></p><p>Out of all of the dedicated servers that you&apos;ll need in order to successfully become a block producer on the XPR/Proton chain network, your Hyperion infrastructure will be the most expensive. If you&apos;re not prepared to spend $250+ a month on another dedicated server, just forget it. </p><p>Other than the massive storage needs, the specifications are about the same as your API or BP node, except you&apos;ll want ideally 128 gigs of RAM (to be super future proof), or at least 64 gigs &#x2013; 32 is not quite enough for a state history node, unlike a producer node. </p><blockquote>Interesting note about producer nodes: despite the high RAM requirements, my producer never uses more than 20% of its RAM. Even that is rare. Nor does it use more than perhaps 10% of its CPU, but good Lord, don&apos;t think that means you can cheap out and downgrade! For whatever reason, your producer needs to be as badass as Batman, and I recommend running the most powerful Intel CPU you can afford (within reason) for the producer (at LEAST 8 cores at around 4 gigahertz; 16 is better!).</blockquote><p>You&apos;ll of course want a powerful CPU, preferably an Intel, but an AMD will work; just make sure you have at least 8 physical cores, and really I&apos;d recommend at least 12. In my experience, Intel CPUs are always going to offer better performance no matter what you&apos;re doing. 
I highly recommend using an Intel for your producer node; however, you can save a little money and opt for an AMD CPU for your Hyperion node, as it&apos;s not as mission critical as the thing that actually produces the blocks. </p><p>Finally, you&apos;ll need as much storage as possible. I was told that as of this moment, which is late October of 2025, a Hyperion node takes up 2 TB of space. So you&apos;ll definitely want at least a 4 TB separate NVMe drive (8 TB would be even better, as it will only ever grow as time goes on) to store your `state-history` and `blocks` directories, as well as at least a 1 TB NVMe for your base system &#x2013; perhaps 2 TB would be better here as well, because you may want space to provide state history archives for download. For the larger disk, make sure you&apos;re using at least RAID-1 and configure ZFS to optimize performance.</p><p><strong>Setting up the separate data volume</strong></p><p>In my case, I had two NVMe drives in RAID-1 to set up:</p><!--kg-card-begin: markdown--><pre>/dev/nvme2n1
/dev/nvme3n1</pre><!--kg-card-end: markdown--><!--kg-card-begin: markdown--><ul>
<li>Install ZFS<pre>sudo apt-get install zfsutils-linux </pre>
</li>
<li>Locate the NVMe device names<pre> lsblk </pre>
</li>
<li>Create a mirrored ZFS pool called &quot;datavolume&quot; on the two NVMe devices<pre> sudo zpool create datavolume mirror /dev/nvme2n1 /dev/nvme3n1</pre></li>
</ul>
<!--kg-card-end: markdown--><p>Verify status</p><!--kg-card-begin: markdown--><pre>
zpool status
</pre>
<pre>
  pool: datavolume
 state: ONLINE
config:
        NAME          STATE     READ WRITE CKSUM
        datavolume    ONLINE
          nvme2n1     ONLINE
          nvme3n1     ONLINE
</pre><!--kg-card-end: markdown--><p>Configure various ZFS settings for optimization</p><!--kg-card-begin: markdown--><ul>
<li>
<p>Enable compression and disable access time updates</p>
<pre> sudo zfs set compression=lz4 datavolume
 sudo zfs set atime=off datavolume</pre>
</li>
<li>
<p>Set ARC caching mode</p>
<pre> sudo zfs set primarycache=all datavolume </pre>
</li>
<li>
<p>Set the main mountpoint</p>
<pre> sudo zfs set mountpoint=/data/hyperion datavolume </pre>
</li>
<li>
<p>Now create sub-datasets</p>
</li>
</ul>
<pre>
sudo zfs create -o mountpoint=/data/hyperion/blocks datavolume/blocks
sudo zfs create -o mountpoint=/data/hyperion/state-history datavolume/state-history
</pre>
<ul>
<li>Disable compression on state-history: <pre> sudo zfs set compression=none datavolume/state-history </pre>
</li>
</ul>
<!--kg-card-end: markdown--><p>Verify status:</p><!--kg-card-begin: markdown--><pre>zfs get mountpoint,compression,atime,primarycache datavolume
zpool status</pre>
<!--kg-card-end: markdown--><p>You should see something like this again:<br></p><!--kg-card-begin: markdown--><pre>
zfs get mountpoint,compression,atime,primarycache datavolume
NAME        PROPERTY      VALUE           SOURCE
datavolume  mountpoint    /data/hyperion  local
datavolume  compression   lz4             local
datavolume  atime         off             local
datavolume  primarycache  all             local

zpool status
  pool: datavolume
 state: ONLINE
config:

	NAME         STATE     READ WRITE CKSUM
	datavolume   ONLINE       0     0     0
	  mirror-0   ONLINE       0     0     0
	    nvme2n1  ONLINE       0     0     0
	    nvme3n1  ONLINE       0     0     0

errors: No known data errors

</pre><!--kg-card-end: markdown--><p>The ZFS filesystem will reduce wear on the NVMe disks and help conserve space, if I understand correctly. Now, set up nodeos like you normally would. I like to create a new user and place the home directory of that user in /opt/XPRMainNet.</p><p><strong>Configure Nodeos (LEAP)</strong></p><!--kg-card-begin: markdown--><ul>
<li>Verify binary&apos;s integrity and then install (recommended)<br>
Import the maintainer keys (run as root)</li>
</ul>
<pre>
cd /tmp
wget https://github.com/arhag.gpg
gpg --import arhag.gpg
wget https://github.com/ericpassmore.gpg
gpg --import ericpassmore.gpg
wget https://github.com/spoonincode.gpg
gpg --import spoonincode.gpg
wget https://github.com/AntelopeIO/leap/releases/download/v5.0.3/leap_5.0.3_amd64.deb
wget https://github.com/AntelopeIO/leap/releases/download/v5.0.3/leap_5.0.3_amd64.deb.asc
gpg --verify leap_5.0.3_amd64.deb.asc leap_5.0.3_amd64.deb
</pre>
<ul>
<li>
<p>You should see a &quot;Good signature&quot; message, indicating the binary&apos;s integrity has been confirmed.</p>
</li>
<li>
<p>Proceed with installation (as root)</p>
</li>
</ul>
<pre>
apt install ./leap_5.0.3_amd64.deb
useradd -m -d /opt/XPRMainNet -s /bin/bash eos
usermod -a -G sudo eos
passwd eos # set a strong pass. remove sudo from user after we&apos;re done 
</pre>
<ul>
<li>As user eos now</li>
</ul>
<pre>
su eos
cd /opt/XPRMainNet &amp;&amp; git clone https://github.com/XPRNetwork/xpr.start.git ./
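</pre><!--kg-card-end: markdown--><p>The symlinking in the next step can look like the sketch below. It uses scratch directories so it is safe to run anywhere; in practice, substitute <code>/data/hyperion/blocks</code>, <code>/data/hyperion/state-history</code>, and your actual nodeos data directory (the exact data-dir path depends on your <code>config.ini</code>, so treat these paths as assumptions):</p><!--kg-card-begin: markdown-->

```shell
# Demo: link pool-backed directories into the nodeos data directory so that
# blocks/ and state-history/ live on the dedicated ZFS pool.
DATA_DIR=$(mktemp -d)   # stand-in for your nodeos data directory
POOL=$(mktemp -d)       # stand-in for /data/hyperion
mkdir -p "$POOL/blocks" "$POOL/state-history"

ln -s "$POOL/blocks" "$DATA_DIR/blocks"
ln -s "$POOL/state-history" "$DATA_DIR/state-history"

ls -l "$DATA_DIR"       # both entries should show as symlinks into the pool
```

<!--kg-card-end: markdown--><p>In production, the two <code>ln -s</code> lines pointed at the real paths are all you need.</p><!--kg-card-begin: markdown--><pre>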
</pre><!--kg-card-end: markdown--><ul><li>Now symlink the /data/state-history and /data/blocks to your data directory and proceed as normal like you&apos;re setting up a non-producing node</li></ul><p>Next, we&apos;ll enable the state history plugin &amp; sync from a backup. Ultimately, you want the entire history of the blockchain on your node. It can take a long time to generate that data from scratch, so ideally you want to download archives of the state-history and block logs, and then finally you need a snapshot that was taken <em>before</em> the height of the block log that matches the state-history. By the way, make sure you delete blocks/reversible from the archive, because that doesn&apos;t happen automatically for some reason. </p><p>You can get a state history archive from Bloxprod&apos;s website (taken on October 18th, 2025, as of the time I wrote this), and Saltant has a snapshot from July 2025 as well. I downloaded the statehistory.xz and blocks.xz from the aforementioned, respective sites, extracted them into my symlinked ZFS volume directories, as well as the earlier snapshot from July, took a deep breath, and started nodeos with the snapshot flag, and my history node started synchronizing. It only took about 3 hours to catch up to the present. </p><p>Grab the state history and blocks archives with wget or curl:</p><p><a href="https://snapshots.bloxprod.io/mainnet/">Downloads from Bloxprod</a></p><p>Grab the July snapshot with wget or curl:</p><p><a href="https://proton.saltant.io/snapshots">Downloads from Saltant</a></p><p>Extract the archives into their respective directories: data/blocks, data/state-history, and snapshots/snapshot.bin. </p><p>If the resources above don&apos;t work whenever you&apos;re reading this, find us on Telegram in the block producers chat and we will assist you. Start nodeos with the --snapshot option. Apparently, it&apos;s no longer necessary to provide the flags that disable replay optimization. 
</p><p>I recommend configuring everything via the ini file except the snapshot, which must be provided from the command line. Finally, start her up and wait for synchronization to occur.</p><!--kg-card-begin: markdown--><pre>
./start.sh --snapshot snapshots/snapshot.bin
tail -f stderr.txt
</pre><!--kg-card-end: markdown--><p>Ultimately, once synced, you want to query get_info via the API on 127.0.0.1:8888 and make sure the first available block is 1:</p><!--kg-card-begin: markdown--><pre>
curl http://localhost:8888/v1/chain/get_info
</pre>
<pre>
{
  &quot;server_version&quot;: &quot;d133c641&quot;,
  &quot;chain_id&quot;: &quot;384da888112027f0321850a169f737c33e53b388aad48b5adace4bab97f437e0&quot;,
  &quot;head_block_num&quot;: 347190879,
  &quot;last_irreversible_block_num&quot;: 347190551,
  &quot;last_irreversible_block_id&quot;: &quot;14b1b517f24dab92b00c097c41f5c887d9b9ee290eddb6cefd8253d69b24ea6f&quot;,
  &quot;head_block_id&quot;: &quot;14b1b65f92f0e8f5e48c732e6687806a2caf4b3cc022ad18ba2ed1bf36aae1bb&quot;,
  &quot;head_block_time&quot;: &quot;2025-10-24T03:51:55.500&quot;,
  &quot;head_block_producer&quot;: &quot;saltant&quot;,
  &quot;virtual_block_cpu_limit&quot;: 200000000,
  &quot;virtual_block_net_limit&quot;: 1048576000,
  &quot;block_cpu_limit&quot;: 200000,
  &quot;block_net_limit&quot;: 1048576,
  &quot;server_version_string&quot;: &quot;v5.0.3&quot;,
  &quot;fork_db_head_block_num&quot;: 347190879,
  &quot;fork_db_head_block_id&quot;: &quot;14b1b65f92f0e8f5e48c732e6687806a2caf4b3cc022ad18ba2ed1bf36aae1bb&quot;,
  &quot;server_full_version_string&quot;: &quot;v5.0.3-d133c6413ce8ce2e96096a0513ec25b4a8dbe837&quot;,
  &quot;total_cpu_weight&quot;: &quot;1074161312000&quot;,
  &quot;total_net_weight&quot;: &quot;1041519891000&quot;,
  &quot;earliest_available_block_num&quot;: 1,
  &quot;last_irreversible_block_time&quot;: &quot;2025-10-24T03:49:11.500&quot;
}
</pre>
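
A quick way to assert on just the field that matters is `jq`. The inlined JSON below is a trimmed stand-in for real `get_info` output so the snippet is self-contained; in practice, substitute the output of `curl -s http://localhost:8888/v1/chain/get_info`:

```shell
# Check that the node serves full history: earliest_available_block_num == 1.
# INFO is a trimmed stand-in for a real get_info response.
INFO='{"head_block_num":347190879,"earliest_available_block_num":1}'
EARLIEST=$(printf '%s' "$INFO" | jq '.earliest_available_block_num')

if [ "$EARLIEST" -eq 1 ]; then
  echo "full history available"
else
  echo "missing early blocks (earliest = $EARLIEST)" >&2
fi
```
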
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2025/10/image.png" class="kg-image" alt="Proton Hyperion Setup in 2025 Part 1" loading="lazy" width="1909" height="963" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2025/10/image.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2025/10/image.png 1000w, https://blog.luminaryvisn.com/content/images/size/w1600/2025/10/image.png 1600w, https://blog.luminaryvisn.com/content/images/2025/10/image.png 1909w" sizes="(min-width: 720px) 720px"></figure><p>In the next post of this series on setting up a Hyperion history node for Proton Chain, aka XPR Network, I&apos;ll run through installing and configuring all of the required components. This information is out there, but the only known, complete guide is a PDF file that is at least five years old. Fortunately, Hyperion has decent <a href="https://hyperion.docs.eosrio.io/">instructions</a> for this process. However, as I mentioned, the official instructions for the Proton chain are somewhat outdated, thus I&apos;ve been poring over documentation and asking a lot of questions in the block producer Telegram channel. </p><p>At the moment I am pretty close to initiating the index process for my node. After I tie up a few loose ends and confirm I&apos;ve got everything right, I will retroactively retrace my steps and document them in my next blog post. Thanks for reading.</p>]]></content:encoded></item><item><title><![CDATA[Mitigating an Attack on the Proton Network]]></title><description><![CDATA[<p>It was recently brought to my attention that a handful of block producer nodes on Proton Mainnet (including the one that I administer) were &quot;producing empty blocks&quot;. An empty block is exactly what it sounds like &#x2013; a block that contains no data. 
Apparently, someone was flooding</p>]]></description><link>https://blog.luminaryvisn.com/were-back/</link><guid isPermaLink="false">67b856c84f03603d5bf5f099</guid><dc:creator><![CDATA[Chev Young]]></dc:creator><pubDate>Fri, 21 Feb 2025 11:06:05 GMT</pubDate><media:content url="https://blog.luminaryvisn.com/content/images/2025/02/43ec8f93-9fa2-4462-9bdf-ab1c33dcd914.webp" medium="image"/><content:encoded><![CDATA[<img src="https://blog.luminaryvisn.com/content/images/2025/02/43ec8f93-9fa2-4462-9bdf-ab1c33dcd914.webp" alt="Mitigating an Attack on the Proton Network"><p>It was recently brought to my attention that a handful of block producer nodes on Proton Mainnet (including the one that I administer) were &quot;producing empty blocks&quot;. An empty block is exactly what it sounds like &#x2013; a block that contains no data. Apparently, someone was flooding the network with invalid transactions. These transactions were reverting and thus were not being included in the blocks produced by the BPs. </p><p>Normally, this isn&apos;t particularly problematic, but in this case it was, because certain BPs were being overwhelmed by the abnormally high number of invalid transactions, and rather than discarding them and including valid transactions in the block the BP produced, some BPs were simply discarding <em>all</em> transactions and producing blocks with no transactions included at all. </p><p>The BPs that were susceptible to this issue were those that were either running their BP and API nodes on the same server or, as in my particular case, separating the API and producer node by using virtual machines that were hosted on the same bare metal server. </p><p>During the last decade that I&apos;ve been doing Linux administration, I&apos;ve never encountered a situation quite like this. I tend to use KVM quite a bit, and it has always been a viable solution for managing system resources in their own isolated environments. 
But in this case, the sheer scale of the attack on the XPR network was overwhelming the host system. After all, virtual machines share the system resources of their host. </p><p>As you may have assumed, the solution was to spin up another dedicated server and host the block producer on that machine, while migrating the API from KVM and running it straight on the bare metal of the original system. After doing this, we were no longer producing empty blocks. </p><p>If you are configuring a Proton mainnet node, it would be a good move to isolate your producer and API servers physically &#x2013; they must each run on their own bare metal dedicated servers! Doing this will save you from dealing with unforeseen issues.</p><p>Additionally, BPs were instructed to add some additional settings in their `config.ini`.</p><!--kg-card-begin: markdown--><p>disable-subjective-p2p-billing = false<br>
disable-subjective-api-billing = false<br>
subjective-account-decay-time-minutes = 60</p>
<!--kg-card-end: markdown--><p>That took care of it! No more empty blocks. </p>]]></content:encoded></item><item><title><![CDATA[Issues After Updating to Latest Commit]]></title><description><![CDATA[<p>I just had a rather frustrating experience. Upon updating to the latest commit, my node would not start. So I had to revert back to the previous build. See asciinema here: </p><figure class="kg-card kg-embed-card"><a href="https://asciinema.org/a/6ew6M7Er3OVIkZreJE0qb5PT5" target="_blank"><img src="https://asciinema.org/a/6ew6M7Er3OVIkZreJE0qb5PT5.png" width="3070"></a></figure><p>But then we noticed that our node was dropping in the leaderboard stats, so I went to investigate</p>]]></description><link>https://blog.luminaryvisn.com/issues-after-updating-to-latest-commit/</link><guid isPermaLink="false">63192151f9b1ea297f2d98d7</guid><dc:creator><![CDATA[Chev Young]]></dc:creator><pubDate>Wed, 07 Sep 2022 23:05:33 GMT</pubDate><content:encoded><![CDATA[<p>I just had a rather frustrating experience. Upon updating to the latest commit, my node would not start. So I had to revert back to the previous build. See asciinema here: </p><figure class="kg-card kg-embed-card"><a href="https://asciinema.org/a/6ew6M7Er3OVIkZreJE0qb5PT5" target="_blank"><img src="https://asciinema.org/a/6ew6M7Er3OVIkZreJE0qb5PT5.png" width="3070"></a></figure><p>But then we noticed that our node was dropping in the leaderboard stats, so I went to investigate the situation. </p><p>It turns out that if you are not running on the latest commit, 1897d5144a7068e4c0d5764d8c9180563db2fe43 (or higher by the time you may be reading this), the network will reject your node and you won&apos;t be able to produce blocks. In order to resolve this, I had to wipe the ~/.near/data directory and then update to the latest commit again. 
Then restart and good to go.</p>]]></content:encoded></item><item><title><![CDATA[Becoming a Near Validator]]></title><description><![CDATA[<p>Our company&apos;s next project is becoming a NEAR validator, so I will be documenting that process. There is currently a &quot;Staking Wars&quot; challenge going on. Those who win will become validators and may even receive the NEAR that they will be staking.</p><p>Before I begin documenting the process,</p>]]></description><link>https://blog.luminaryvisn.com/becoming-a-near-validator-part-1/</link><guid isPermaLink="false">62e03c1af9b1ea297f2d973b</guid><dc:creator><![CDATA[Chev Young]]></dc:creator><pubDate>Thu, 04 Aug 2022 02:17:07 GMT</pubDate><media:content url="https://blog.luminaryvisn.com/content/images/2022/08/Near-protocol-for-DApps-1024x611-1.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.luminaryvisn.com/content/images/2022/08/Near-protocol-for-DApps-1024x611-1.jpeg" alt="Becoming a Near Validator"><p>Our company&apos;s next project is becoming a NEAR validator, so I will be documenting that process. There is currently a &quot;Staking Wars&quot; challenge going on. Those who win will become validators and may even receive the NEAR that they will be staking.</p><p>Before I begin documenting the process, I would like to draw your attention to some things that initially confused me. Don&apos;t follow <a href="https://near-nodes.io/validator/validator-bootcamp">these</a> instructions. Rather, follow <em><a href="https://github.com/near/stakewars-iii/blob/main/challenges">these</a></em> instructions: <br><br><a href="https://github.com/near/stakewars-iii/blob/main/challenges">https://github.com/near/stakewars-iii/blob/main/challenges</a></p><p>The former link talks about using either the testnet or guildnet when you should in fact be using the shardnet. I didn&apos;t initially understand this and so I had to redo the entire thing.</p><p>The first thing you need to do is get a server. The specifications are as follows:</p><!--kg-card-begin: html--><table><thead><tr><th>Hardware</th><th>Recommended Specifications</th></tr></thead><tbody><tr><td>CPU</td><td>x86_64 (Intel, AMD) processor with at least 8 physical cores</td></tr><tr><td>CPU Features</td><td>CMPXCHG16B, POPCNT, SSE4.1, SSE4.2, AVX</td></tr><tr><td>RAM</td><td>16GB DDR4</td></tr><tr><td>Storage</td><td>1TB SSD (NVMe SSD is recommended. HDD will be enough for localnet only)</td></tr></tbody></table><!--kg-card-end: html--><p>Pretty much any modern CPU will have those features, so what you should pay particular attention to is the RAM, CPU cores, and storage space. I would recommend using either Debian or Ubuntu for an operating system.<br></p><p>Once you have a server up and running, you can get your test network node going. The first thing you ought to do, as always, after setting up your SSH keys is update the system.</p><pre><code>sudo apt update &amp;&amp; sudo apt -y upgrade</code></pre><p>After the system is up to date, the first step is to install Node.js. There are several ways of doing this, but I opted to download the binaries because version 17.xx is specified. I downloaded the latest 17.xx binary from the directory hosted here: <a href="https://nodejs.org/dist/v17.9.1/">https://nodejs.org/dist/v17.9.1/</a></p><p>Next, we need to install it. 
Extract the binaries, move them into place, and add Node.js to your system path.</p><pre><code>tar -xf node-v17.9.1-linux-x64.tar.xz
sudo mkdir -p /usr/local/lib/node
sudo mv node-v17.9.1-linux-x64 /usr/local/lib/node/nodejs
cat &lt;&lt; &apos;_EOF_&apos; &gt;&gt; ~/.profile
export NODEJS_HOME=/usr/local/lib/node/nodejs
export PATH=$NODEJS_HOME/bin:$PATH
_EOF_</code></pre><p>Note: ensure that `.profile` is being loaded with your `.bashrc`. If not, just add a line to your ~/.bashrc:</p><pre><code>source ~/.profile</code></pre><p>Next, we need to install near-cli. It&apos;s very simple:</p><pre><code>sudo npm install -g near-cli</code></pre><p>Then confirm that it is working: </p><pre><code>near validators current</code></pre><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2022/08/Screenshot-from-2022-08-03-20-02-31.png" class="kg-image" alt="Becoming a Near Validator" loading="lazy" width="1916" height="1080" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2022/08/Screenshot-from-2022-08-03-20-02-31.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2022/08/Screenshot-from-2022-08-03-20-02-31.png 1000w, https://blog.luminaryvisn.com/content/images/size/w1600/2022/08/Screenshot-from-2022-08-03-20-02-31.png 1600w, https://blog.luminaryvisn.com/content/images/2022/08/Screenshot-from-2022-08-03-20-02-31.png 1916w" sizes="(min-width: 720px) 720px"></figure><p><br><br>You should see some output. See screenshot above. Now we have a ton of dependencies to install:</p><pre><code>sudo apt install -y git binutils-dev libcurl4-openssl-dev zlib1g-dev libdw-dev libiberty-dev cmake gcc g++ python3 docker.io protobuf-compiler libssl-dev pkg-config clang llvm cargo python3-pip build-essential make awscli ccze jq</code></pre><p>Next, we need to install Rust:</p><pre><code class="language-text">export PATH=&quot;$USER_BASE_BIN:$PATH&quot;
curl --proto &apos;=https&apos; --tlsv1.2 -sSf https://sh.rustup.rs | sh</code></pre><p>And then nearcore &#x2013; I just went with the latest release candidate available. </p><pre><code class="language-text">source $HOME/.cargo/env
git clone https://github.com/near/nearcore
cd nearcore
git fetch origin --tags
git checkout tags/1.28.0-rc.3 -b mynode
make neard</code></pre><p>After it&apos;s successfully installed, we need to set it up and then run it:</p><p></p><pre><code>./target/release/neard --home ~/.near init --chain-id testnet --download-genesis</code></pre><p>When that completes, we need to replace the configuration file:</p><p></p><pre><code>rm ~/.near/config.json
wget -O ~/.near/config.json https://s3-us-west-1.amazonaws.com/build.nearprotocol.com/nearcore-deploy/testnet/config.json</code></pre><p>Finally, we need to download the latest snapshot from S3:</p><p></p><pre><code>
aws s3 cp --no-sign-request s3://near-protocol-public/backups/testnet/rpc/latest .
LATEST=$(cat latest)
aws s3 cp --no-sign-request --recursive s3://near-protocol-public/backups/testnet/rpc/$LATEST ~/.near/data</code></pre><p>When that is complete, we can start up the node and let it synchronize with the network:</p><pre><code>cd nearcore
./target/release/neard --home ~/.near run</code></pre><p>After it is synced up (which should not take very long), let&apos;s configure the NEAR daemon to run as a service:</p><pre><code>cat &lt;&lt; __EOF__ | sudo tee /etc/systemd/system/neard.service
[Unit]
Description=NEARd Daemon Service

[Service]
Type=simple
User=&lt;USER&gt;
#Group=near
WorkingDirectory=/home/&lt;USER&gt;/.near
ExecStart=/home/&lt;USER&gt;/nearcore/target/release/neard run
Restart=on-failure
RestartSec=30
KillSignal=SIGINT
TimeoutStopSec=45
KillMode=mixed

[Install]
WantedBy=multi-user.target

__EOF__

sudo systemctl daemon-reload
sudo systemctl enable neard
sudo systemctl start neard
# test log functionality
journalctl -n 100 -f -u neard | ccze -A</code></pre><p><br><br>Now we need to create a wallet and then a staking pool. </p><p>To create a wallet, head over to <a href="https://wallet.shardnet.near.org/">https://wallet.shardnet.near.org/</a>. Then you need to log in. Simply run:</p><pre><code>near login</code></pre><p></p><p>The command prints a link; open it in the same browser as the one in which you created and are signed into your wallet. It will look as if it didn&apos;t work, but it did:</p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2022/08/4.png" class="kg-image" alt="Becoming a Near Validator" loading="lazy" width="768" height="513" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2022/08/4.png 600w, https://blog.luminaryvisn.com/content/images/2022/08/4.png 768w" sizes="(min-width: 720px) 720px"></figure><p>Finally, type in the name of your account (you.shardnet.near) and then you should be good. </p><p>Next, you need to generate a validator key. Pick a name for your staking pool. I simply called mine &quot;luminaryvision&quot;, which is my username. Then you need to copy it to the correct directory:</p><pre><code>near generate-key &lt;pool_id&gt;
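# note (per the stake wars guide): the generated key lands in ~/.near-credentials/shardnet/
# after the cp below, edit ~/.near/validator_key.json so that &quot;account_id&quot; is set to
# &lt;pool_id&gt;.factory.shardnet.near and the key field is named &quot;secret_key&quot; (rename &quot;private_key&quot;)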
cp ~/.near-credentials/shardnet/YOUR_WALLET.json ~/.near/validator_key.json</code></pre><p>This is the command that I ran to create the staking pool:</p><pre><code>near call factory.shardnet.near create_staking_pool &apos;{&quot;staking_pool_id&quot;: &quot;luminaryvision&quot;, &quot;owner_id&quot;: &quot;luminaryvision&quot;, &quot;stake_public_key&quot;: &quot;ed25519:76R8JC14uVckoEJbX13rww5RmZ6GXoKcQtY6mr3ZxTHA&quot;, &quot;reward_fee_fraction&quot;: {&quot;numerator&quot;: 1, &quot;denominator&quot;: 100}, &quot;code_hash&quot;:&quot;DD428g9eqLL8fWUxv8QSpVFzyHi1Qd16P8ephYCTmMSZ&quot;}&apos; --accountId=&quot;luminaryvision.shardnet.near&quot; --amount=2048 --gas=300000000000000</code></pre><p>Now in this example:</p><ul><li>&quot;luminaryvision&quot; is the name of my pool as well as my account name.</li><li>The field starting with &quot;ed25519&quot; is my public key.</li><li>The numerator field, which I set to 1, pairs with the denominator of 100 to set the pool&apos;s reward fee at 1%.</li><li>Account_id is your account&apos;s id (duh).</li><li>The gas value is the maximum gas you are willing to spend on this tx.</li></ul><p>Afterwards, you need to send a &quot;ping&quot; transaction:</p><pre><code>near call luminaryvision.factory.shardnet.near ping &apos;{}&apos; --accountId luminaryvision.shardnet.near --gas=200000000000000</code></pre><p>If that goes through, then we are almost done! The last thing that this guide will show you is how to check your node to ensure it is configured correctly. Start by looking at your logs:</p><pre><code>journalctl -n 100 -f -u neard | ccze -A</code></pre><p>Next, you can check the version of your node by querying the RPC server:</p><pre><code>curl -s http://127.0.0.1:3030/status | jq .version</code></pre><p>Here are some useful commands and their explanations:</p><pre><code>near view luminaryvision.factory.shardnet.near get_accounts &apos;{&quot;from_index&quot;: 0, &quot;limit&quot;: 10}&apos; --accountId luminaryvision.shardnet.near # check stake and delegation
curl -s -d &apos;{&quot;jsonrpc&quot;: &quot;2.0&quot;, &quot;method&quot;: &quot;validators&quot;, &quot;id&quot;: &quot;dontcare&quot;, &quot;params&quot;: [null]}&apos; -H &apos;Content-Type: application/json&apos; 127.0.0.1:3030 | jq -c &apos;.result.prev_epoch_kickout[] | select(.account_id | contains (&quot;luminaryvision.factory.shardnet.near&quot;))&apos; | jq .reason  # if a validator is kicked, this will tell you why
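# uptime can be computed offline from the num_produced_blocks / num_expected_blocks fields; sample values:
echo &apos;{&quot;num_produced_blocks&quot;:212,&quot;num_expected_blocks&quot;:214}&apos; | jq &apos;.num_produced_blocks / .num_expected_blocks * 100&apos;  # percent uptime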
curl -s -d &apos;{&quot;jsonrpc&quot;: &quot;2.0&quot;, &quot;method&quot;: &quot;validators&quot;, &quot;id&quot;: &quot;dontcare&quot;, &quot;params&quot;: [null]}&apos; -H &apos;Content-Type: application/json&apos; 127.0.0.1:3030 | jq -c &apos;.result.current_validators[] | select(.account_id | contains (&quot;POOL_ID&quot;))&apos; # check produced vs expected blocks</code></pre><p>At this point you should be able to see your node on the <a href="https://explorer.shardnet.near.org/nodes/validators">list of validators:</a></p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2022/08/Screenshot-from-2022-08-03-22-15-21.png" class="kg-image" alt="Becoming a Near Validator" loading="lazy" width="1919" height="1062" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2022/08/Screenshot-from-2022-08-03-22-15-21.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2022/08/Screenshot-from-2022-08-03-22-15-21.png 1000w, https://blog.luminaryvisn.com/content/images/size/w1600/2022/08/Screenshot-from-2022-08-03-22-15-21.png 1600w, https://blog.luminaryvisn.com/content/images/2022/08/Screenshot-from-2022-08-03-22-15-21.png 1919w" sizes="(min-width: 720px) 720px"></figure><p>If so, congrats! We have reached the end of the tutorial. One last thing &#x2013; be sure to watch for new pinned posts in the stake wars <a href="https://discord.com/channels/490367152054992913/991851497002381363">channel</a> of the <a href="https://near.chat">discord community</a>.</p><p></p><p>One final thought: you should put your kernel&apos;s CPU frequency governor into performance mode:</p><pre><code>$ echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
performance</code></pre><p>I hope you found this helpful or interesting.</p>]]></content:encoded></item><item><title><![CDATA[Claiming Rewards]]></title><description><![CDATA[<p>Proton is so mysterious and scary at times. The first time that I tried to claim validator rewards, I thought that I accidentally sent them to the wrong account. Thus I figured that a post about how to do this correctly would be useful. </p><h4 id="voter-rewards">Voter Rewards</h4><p>To claim voter rewards, unlock</p>]]></description><link>https://blog.luminaryvisn.com/claiming-rewards/</link><guid isPermaLink="false">62350a3af9b1ea297f2d96e1</guid><dc:creator><![CDATA[Chev Young]]></dc:creator><pubDate>Sat, 19 Mar 2022 00:43:15 GMT</pubDate><media:content url="https://blog.luminaryvisn.com/content/images/2022/03/moneychick_.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.luminaryvisn.com/content/images/2022/03/moneychick_.png" alt="Claiming Rewards"><p>Proton is so mysterious and scary at times. The first time that I tried to claim validator rewards, I thought that I accidentally sent them to the wrong account. Thus I figured that a post about how to do this correctly would be useful. </p><h4 id="voter-rewards">Voter Rewards</h4><p>To claim voter rewards, unlock your wallet and run the following command (replacing <em>luminaryvisn</em> with your producer name):</p><pre><code class="language-json">./cleos.sh push transaction &apos;{
  &quot;delay_sec&quot;: 0,
  &quot;max_cpu_usage_ms&quot;: 0,
  &quot;actions&quot;: [
    {
      &quot;account&quot;: &quot;eosio&quot;,
      &quot;name&quot;: &quot;voterclaim&quot;,
      &quot;data&quot;: {
        &quot;owner&quot;: &quot;luminaryvisn&quot;
      },
      &quot;authorization&quot;: [
        {
          &quot;actor&quot;: &quot;luminaryvisn&quot;,
          &quot;permission&quot;: &quot;owner&quot;
        }
      ]
    }
  ]
}&apos;
</code></pre><h4 id="validator-rewards">Validator Rewards</h4><p>EOSIO allows us very granular control over permissions. Here we are creating a new permission called &quot;claimer&quot; using our <em>active</em> key:<br><br></p><pre><code>./cleos.sh set account permission luminaryvisn claimer &apos;{&quot;threshold&quot;:1,&quot;keys&quot;:[{&quot;key&quot;:&quot;EOS83tLDZorE8eQDhKrZdUG21DG1jctGhEJDEKpsfKJ6kjbQHtCjg&quot;,&quot;weight&quot;:1}]}&apos; &quot;active&quot;</code></pre><p>Now, we configure the permission we just created, called &quot;claimer&quot;, to be able to claim rewards and do nothing else:</p><pre><code>./cleos.sh set action permission luminaryvisn eosio claimrewards claimer</code></pre><p>Finally, we are able to claim rewards:</p><pre><code>./cleos.sh system claimrewards luminaryvisn -p luminaryvisn@claimer</code></pre><p>We can only do this every 24 hours, and just because the command appears to execute doesn&apos;t mean that you actually had any rewards to claim.</p><p>I created a <a href="https://gist.github.com/darkerego/3385acd2445a0eef2fed7ed2a563ddde">script</a> to automatically perform these actions for you. Set it up as a cron job to run every day. Apparently you can claim every 24 hours plus 1 second, so it may not actually be able to claim on every single run. Note that you may want to use another keypair when you create the permission (for security reasons) &#x2013; at some point, there will be another blog post about that.</p>]]></content:encoded></item><item><title><![CDATA[Deploy on the Main Network]]></title><description><![CDATA[<p>In this series of the Luminary Edition, I will document the process of configuring your main network block producer node. Note that in order to launch a node on the main network, you must successfully run on the test network for at least two weeks. 
If you&apos;ve been</p>]]></description><link>https://blog.luminaryvisn.com/deploying-on-the-mainnet/</link><guid isPermaLink="false">622d2960f9b1ea297f2d9608</guid><dc:creator><![CDATA[Chev Young]]></dc:creator><pubDate>Wed, 16 Mar 2022 23:25:45 GMT</pubDate><media:content url="https://blog.luminaryvisn.com/content/images/2022/03/Screenshot-from-2022-03-16-19-28-45-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.luminaryvisn.com/content/images/2022/03/Screenshot-from-2022-03-16-19-28-45-1.png" alt="Deploy on the Main Network"><p>In this series of the Luminary Edition, I will document the process of configuring your main network block producer node. Note that in order to launch a node on the main network, you must successfully run on the test network for at least two weeks. If you&apos;ve been approved, then congratulations, and I hope you find this blog helpful. This post will cover configuring your block producer node.</p><h4 id="server-specifications">Server Specifications</h4><p>You will need a really powerful server to run this on. The specs that we chose to start with are as follows:</p><ul><li>12-core AMD CPU @ 3.8 GHz</li><li>64 GB of RAM</li><li>500 GB SSD</li><li>4 TB RAID SSD</li></ul><h4 id="installing-eoseo">Installing EOSIO</h4><p>As I mentioned in my last post, compiling the software takes forever, so I was going to deploy with pre-compiled binaries. This was an Ubuntu 21 system.</p><pre><code>root@ns106991:/tmp# dpkg -i eosio_2.0.5-1-ubuntu-18.04_amd64.deb
Selecting previously unselected package eosio.
(Reading database ... 71870 files and directories currently installed.)
Preparing to unpack eosio_2.0.5-1-ubuntu-18.04_amd64.deb ...
Unpacking eosio (2.0.5-1) ...
dpkg: dependency problems prevent configuration of eosio:
 eosio depends on libicu60; however:
  Package libicu60 is not installed.
 eosio depends on libtinfo5; however:
  Package libtinfo5 is not installed.

dpkg: error processing package eosio (--install):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 eosio
</code></pre><p>But I ran into an issue (the package depends on libicu60 and libtinfo5, which are not available on this release), so I tried to compile it from source. </p><pre><code>mkdir /opt/EOSIO  
cd /opt/EOSIO  

git clone https://github.com/eosio/eos --recursive    
cd eos  

git checkout v2.0.5  
git submodule update --init --recursive   

</code></pre><p>Run the build script ...</p><pre><code>./scripts/eosio_build.sh -P -y
EOSIO Version: 2.0.5
Sun Mar 13 01:18:09 UTC 2022
User: root
Current branch: HEAD
No installation location was specified. Please provide the location where EOSIO is installed.

EOSIO will be installed to: /root/eosio/2.0
=====================================================================================
======================= Starting EOSIO Dependency Install ===========================
Architecture: Linux
OS name: Ubuntu
OS Version: 20.04
CPU cores: 12
Physical Memory: 65G
Disk space total: 3666G
Disk space available: 3476G
 - You must be running 16.04.x or 18.04.x to install EOSIO.
</code></pre><p>On the test network, I was able to use the latest Ubuntu long term release, but the build script insists on using this older version. It is not worth the time to hack it, so just be sure to use Ubuntu 18.04.1.</p><pre><code>./scripts/eosio_build.sh -P -y
./scripts/eosio_install.sh

mkdir /opt/bin
mkdir /opt/bin/v2.0.5
cp /opt/EOSIO/eos/build/programs/nodeos/nodeos /opt/bin/v2.0.5/
cp /opt/EOSIO/eos/build/programs/cleos/cleos /opt/bin/v2.0.5/
cp /opt/EOSIO/eos/build/programs/keosd/keosd /opt/bin/v2.0.5/
ln -sf /opt/bin/v2.0.5 /opt/bin/bin</code></pre><p>This time the software installed okay. At this point I would advise making a full system backup of your server. This will come in handy if you decide to configure a history node in the future. The next step is to download the boilerplate and configure everything in the <code>config.ini</code> file.</p><pre><code>mkdir /opt/ProtonMainNet
cd /opt/ProtonMainNet
git clone https://github.com/ProtonProtocol/proton.start.git ./
</code></pre><p>If you have not already, you need to create a new account for your node. Then edit your <code>config.ini</code>, setting these fields:</p><ul><li>server address: p2p-server-address = ENTER_YOUR_NODE_EXTERNAL_IP_ADDRESS:9876</li><li>if BP: your producer name: producer-name = YOUR_BP_NAME</li><li>if BP: add producer keypair for signing blocks (this pub key should be used in regproducer action):<br>signature-provider = YOUR_PUB_KEY_HERE=KEY:YOUR_PRIV_KEY_HERE</li><li>replace the p2p-peer-address list with a freshly generated one from the monitor site: <a href="https://monitor.protonchain.com/#p2p" rel="nofollow">https://monitor.protonchain.com/#p2p</a></li><li>Check the chain-state-db-size-mb value in the config; it should not be larger than your available RAM:<br>chain-state-db-size-mb = 16384</li></ul><p>Here is what mine looked like after successful configuration:</p><pre><code>################################################################################
# Proton tools
#
# Created by http://CryptoLions.io
# 
# https://github.com/ProtonProtocol/proton.start
#
#
################################################################################


    ###########--producer--#########################
    #
    agent-name = luminaryvisn
    plugin = eosio::producer_plugin
    producer-name = luminaryvisn
    signature-provider = EOS83tLDZorE8eQDhKrZdUG21DG1jctGhEJDEKpsfKJ6kjbQHtCjg=KEY:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    #
    ############################################################### 

    
    http-server-address = 0.0.0.0:8888
    p2p-listen-endpoint = 0.0.0.0:9876
    p2p-server-address = xxxxxxxxxxxxxxx:9876
    
    chain-state-db-size-mb = 62767 # I have 64 gigs ram so set this just under
    reversible-blocks-db-size-mb = 1024
    
    contracts-console = true
    
    p2p-max-nodes-per-host = 100
    
    chain-threads = 8
    http-threads = 6
    #wasm-runtime = wabt
    http-validate-host = false
    verbose-http-errors = true
    abi-serializer-max-time-ms = 2000

    #produce-time-offset-us = 250000
    last-block-time-offset-us = -300000
    # eosio2.0
    http-max-response-time-ms = 100
    #Only!! for performance eosio 2.0+
    eos-vm-oc-compile-threads = 4
    eos-vm-oc-enable = 1
    wasm-runtime = eos-vm-jit
    #END        
        

    # Safely shut down node when free space
    chain-state-db-guard-size-mb = 128
    reversible-blocks-db-guard-size-mb = 2


    access-control-allow-origin = *
    access-control-allow-headers = Origin, X-Requested-With, Content-Type, Accept
    # access-control-allow-headers =
    # access-control-max-age =
    # access-control-allow-credentials = false


    # actor-whitelist =
    # actor-blacklist =
    # contract-whitelist =
    # contract-blacklist =
    # filter-on =


    # SSL
    # Filename with https private key in PEM format. Required for https (eosio::http_plugin)
    # https-server-address =
    # Filename with the certificate chain to present on https connections. PEM format. Required for https. (eosio::http_plugin)
    # https-certificate-chain-file =
    # Filename with https private key in PEM format. Required for https (eosio::http_plugin)
    # https-private-key-file =

    ###########################################################################
    # State History (For 1.8.0-rc1+ add to start params --disable-replay-opts )
    # plugin = eosio::state_history_plugin
    # state-history-dir = state-history
    # trace-history = true
    # chain-state-history = true
    # state-history-endpoint = 0.0.0.0:8080
    # debug mode info (https://github.com/EOSIO/eos/pull/7298)
    # #trace-history-debug-mode
    ############################################################################

    allowed-connection = any
    
    # allowed-connection = specified
    # peer-private-key = [&quot;!!NEW_KEY_PUB!!&quot;,&quot;!!NEW_KEY_PRIV!!&quot;] #create new key for private peers
    # peer-key = &quot;!![PUBKEY]!!&quot; 
   
   
    max-clients = 150
    connection-cleanup-period = 30
    sync-fetch-span = 2000
    enable-stale-production = false

    
    pause-on-startup = false
    max-irreversible-block-age = -1
    txn-reference-block-lag = 0
    


    # peer-key =
    # peer-private-key =

    plugin = eosio::producer_plugin
    #plugin = eosio::producer_api_plugin
    plugin = eosio::chain_plugin
    plugin = eosio::chain_api_plugin

#p2p-peer-address = 
#p2p-peer-address = proton.cryptolions.io:9876
#p2p-peer-address = proton.eu.eosamsterdam.net:9103
#p2p-peer-address = proton.lynxsweden.org:9576
#p2p-peer-address = p2p-proton.eosarabia.net:9876
#p2p-peer-address = peer1.proton.pink.gg:48011
#p2p-peer-address = proton-p2p.eos.barcelona:9850
#p2p-peer-address = proton.lynxsweden.org:9576
#p2p-peer-address = proton.eosdublin.io:9877
#p2p-peer-address = peer.proton.alohaeos.com:9876
#p2p-peer-address = peer1-proton.eosphere.io:9876
#p2p-peer-address = proton.eosvenezuela.io:9777
#p2p-peer-address = p2p.proton.eostribe.io:19880
#p2p-peer-address = proton.greymass.com:19875
#p2p-peer-address = proton.eosio.cr:9879
p2p-peer-address = proton.cryptolions.io:9876
p2p-peer-address = proton.eosdublin.io:9877
</code></pre><p>I also uncommented the producer plugin line. Configure your system&apos;s firewall to allow SSH traffic and TCP traffic on ports 8888 and 9876:</p><pre><code>sudo ufw allow ssh
sudo ufw allow 8888/tcp
sudo ufw allow 9876/tcp
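# optional: review the queued rules before enabling the firewall
sudo ufw show added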
sudo ufw enable</code></pre><p>You should also take additional steps to harden the security of your server before placing the private keys on it. Be sure to change the permissions of <code>config.ini</code> so that only its owner can read it:</p><pre><code>chmod 600 config.ini</code></pre><p>Obviously you ought to be running all of this under a separate Unix user without superuser permissions.</p><p>After configuring the .ini file, you need to start the node and allow it to synchronize. I tried to restore from the latest snapshot to save time, but that gave me a strange error about &quot;snapshot can only used to initialize an empty database&quot;, which is odd, because whatever database it is referring to ought to be empty, as this is a brand-new install. </p><p>Wait for your node to synchronize. You can check the status like so:</p><pre><code>curl http://localhost:8888/v1/chain/get_info | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   796  100   796    0     0   777k      0 --:--:-- --:--:-- --:--:--  777k
{
  &quot;server_version&quot;: &quot;de78b49b&quot;,
  &quot;chain_id&quot;: &quot;384da888112027f0321850a169f737c33e53b388aad48b5adace4bab97f437e0&quot;,
  &quot;head_block_num&quot;: 119682260,
  &quot;last_irreversible_block_num&quot;: 119681926,
  &quot;last_irreversible_block_id&quot;: &quot;07223386041237476f36dd168e92c02dbf10828308ad317bed4642a8d96a4e59&quot;,
  &quot;head_block_id&quot;: &quot;072234d42901571c1a8bb4ce2999b357d288ab1a7d434bb35f4b3a0f40683833&quot;,
  &quot;head_block_time&quot;: &quot;2022-03-16T22:54:22.500&quot;,
  &quot;head_block_producer&quot;: &quot;eosbarcelona&quot;,
  &quot;virtual_block_cpu_limit&quot;: 200000000,
  &quot;virtual_block_net_limit&quot;: 1048576000,
  &quot;block_cpu_limit&quot;: 199900,
  &quot;block_net_limit&quot;: 1048576,
  &quot;server_version_string&quot;: &quot;v2.0.5&quot;,
  &quot;fork_db_head_block_num&quot;: 119682260,
  &quot;fork_db_head_block_id&quot;: &quot;072234d42901571c1a8bb4ce2999b357d288ab1a7d434bb35f4b3a0f40683833&quot;,
  &quot;server_full_version_string&quot;: &quot;v2.0.5-de78b49b5765c88f4e005046d1489c3905985b94&quot;
}
</code></pre><p>To gauge the synchronization progress, note the head block time. Finally, you can run this command to register as a block producer, and you will officially be live:</p><pre><code>eos@ns106991:/opt/ProtonMainNet/protonNode$ ./cleos.sh system regproducer luminaryvisn PUB_K1_83tLDZorE8eQDhKrZdUG21DG1jctGhEJDEKpsfKJ6kjbRQLnCT &quot;luminaryvisn.com&quot; 0
executed transaction: 1cfa23e3bc5c33aca5a546c9f89723b41ec76389043ce120e985c0c229906651  160 bytes  399 us
#         eosio &lt;= eosio::regproducer           {&quot;producer&quot;:&quot;luminaryvisn&quot;,&quot;producer_key&quot;:&quot;EOS83tLDZorE8eQDhKrZdUG21DG1jctGhEJDEKpsfKJ6kjbQHtCjg&quot;,&quot;u...
warning: transaction executed locally, but may not be confirmed by the network yet         ] 
</code></pre><p>The next thing on our list is configuring the <code>bp.json</code> file, which tells the network where to find your company logo and other contact details. There is a <a href="https://proton.eosio.online/bpjson">generator</a> which you can use to create it. This needs to be at the root of the domain that you registered. See example: <a href="https://luminaryvisn.com/bp.json">https://luminaryvisn.com/bp.json</a></p><p>You should see your logo and contact details show up on the network <a href="https://proton.eosio.online/block-producers">monitor</a> if this is configured correctly.</p><figure class="kg-card kg-image-card"><img src="https://blog.luminaryvisn.com/content/images/2022/03/Screenshot-2022-03-16-at-19-03-11-EOSIO-network-monitor-real-time-infrastructure-data-for-multiple-chains.png" class="kg-image" alt="Deploy on the Main Network" loading="lazy" width="1641" height="240" srcset="https://blog.luminaryvisn.com/content/images/size/w600/2022/03/Screenshot-2022-03-16-at-19-03-11-EOSIO-network-monitor-real-time-infrastructure-data-for-multiple-chains.png 600w, https://blog.luminaryvisn.com/content/images/size/w1000/2022/03/Screenshot-2022-03-16-at-19-03-11-EOSIO-network-monitor-real-time-infrastructure-data-for-multiple-chains.png 1000w, https://blog.luminaryvisn.com/content/images/size/w1600/2022/03/Screenshot-2022-03-16-at-19-03-11-EOSIO-network-monitor-real-time-infrastructure-data-for-multiple-chains.png 1600w, https://blog.luminaryvisn.com/content/images/2022/03/Screenshot-2022-03-16-at-19-03-11-EOSIO-network-monitor-real-time-infrastructure-data-for-multiple-chains.png 1641w" sizes="(min-width: 720px) 720px"></figure><p>Remember to put your kernel&apos;s CPU frequency governor in performance mode as well: </p><pre><code>echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
performance
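# verify the governor took effect on every core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort -u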
</code></pre><p>That is all for now. Stay tuned for the next series which will go over configuring a history node, claiming rewards, and anything else that comes up.</p>]]></content:encoded></item><item><title><![CDATA[Becoming a Proton Block Producer]]></title><description><![CDATA[<p>In this article I will detail the process of configuring a node on the Proton test network. Although there are several guides available on the internet, I did not find one that was very comprehensive, and spent much time Googling my way through issues. So here I will document the</p>]]></description><link>https://blog.luminaryvisn.com/becoming-a-proton-block-producer/</link><guid isPermaLink="false">622a22b0f9b1ea297f2d94cb</guid><dc:creator><![CDATA[Chev Young]]></dc:creator><pubDate>Fri, 11 Mar 2022 01:10:18 GMT</pubDate><media:content url="https://blog.luminaryvisn.com/content/images/2022/03/proton-blockchain.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.luminaryvisn.com/content/images/2022/03/proton-blockchain.png" alt="Becoming a Proton Block Producer"><p>In this article I will detail the process of configuring a node on the Proton test network. Although there are several guides available on the internet, I did not find one that was very comprehensive, and spent much time Googling my way through issues. So here I will document the process as comprehensively as I can, taking care to note the things that I wish I had known ahead of time and had to figure out for myself.</p><h2 id="server-specifications">Server Specifications</h2><p>The specifications for running on the test network are not as intense as the main network&apos;s; however, my client and I found that our execution times were too high when we tried starting with a cheaper server. Our first server was reasonably fast - quad-core, 8 GB of RAM - but it was not cutting it. Thus we ended up upgrading our server (and there will be a separate post about that process soon). 
</p><p>The specs we ended up settling on are as follows:</p><ul><li>32 GB of RAM</li><li>8-core Intel processor at 4.2 GHz</li><li>500 GB SSD</li><li>I added an 8 GB swap file</li><li>a one-gigabit line</li></ul><p>We got this box from OVH; it was only $120 per month, if I recall correctly. </p><h2 id="installation">Installation</h2><p></p><p>At the time of this writing, you want to use an Ubuntu 18 box for compatibility with the software. While I am sure you could get away with running this on any Linux distribution that you&apos;d like, for the sake of simplicity I recommend Ubuntu 18 (LTS release).</p><p>After you deploy your server, you should harden its security. Of course this is more important on the main network than the test network, but you may as well get in this habit now. Turn off SSH password authentication and set up a firewall, at least.</p><p>The installation instructions can be found here:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/ProtonProtocol/proton-testnet.start"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - ProtonProtocol/proton-testnet.start</div><div class="kg-bookmark-description">Contribute to ProtonProtocol/proton-testnet.start development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Becoming a Proton Block Producer"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">ProtonProtocol</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/c61b40d8b9622c88730aacf42cbb5337dfe5b7141a21687c777f3d6c71b40b00/ProtonProtocol/proton-testnet.start" alt="Becoming a Proton Block Producer"></div></a></figure><p>For the most part, the installation instructions are reasonably straightforward. 
However, I do have a few suggestions to make this go more smoothly.</p><h2 id="the-process">The Process</h2><p>First, do not bother compiling the software yourself. It takes <em>forever and a half.</em></p><p>Let&apos;s begin. SSH to your box as root, and create a new user called &quot;proton&quot; or something. Add it to the sudo group, grab the latest compiled binaries, and install:</p><pre><code class="language-bash">useradd -d /opt/ProtonTestnet proton
usermod -a -G sudo proton
passwd proton
sudo su proton
mkdir /opt/bin
mkdir /opt/bin/v2.1.0
cp /usr/opt/eosio/v2.1.0/bin/nodeos /opt/bin/v2.1.0/
cp /usr/opt/eosio/v2.1.0/bin/cleos /opt/bin/v2.1.0/
cp /usr/opt/eosio/v2.1.0/bin/keosd /opt/bin/v2.1.0/
ln -sf /opt/bin/v2.1.0/ /opt/bin/bin
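# sanity-check that the binaries run from the symlinked path
/opt/bin/bin/cleos version client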

</code></pre><p>Now that the software is installed, we need to configure our node.</p><p></p><pre><code>mkdir /opt/ProtonTestnet
cd /opt/ProtonTestnet
git clone https://github.com/ProtonProtocol/proton-testnet.start.git ./

</code></pre><p>We need to create an account. To do that, you need to go to the <a href="https://monitor.testnet.protonchain.com/">proton testnet monitor</a> and follow these instructions: </p><p>Click <a href="https://monitor.testnet.protonchain.com/#createKey" rel="nofollow">&#x201C;Create Keypair&#x201D;</a> button located at the top left of the page, copy and save both public and private key. also you can create key pair using cleos command:</p><p><code>./cleos.sh create key</code><br></p><p>Click <a href="https://monitor.testnet.protonchain.com/#account" rel="nofollow">&#x201C;Create Account&#x201D;</a> at the top left of the page, enter an account name, submit your previously saved public key in both Owner and Active Public Key field, complete the captcha, and hit create.</p><p>Edit the config.ini file:</p><blockquote>server address: p2p-server-address = ENRT_YOUR_NODE_EXTERNAL_IP_ADDRESS:9876</blockquote><blockquote>replace p2p-peer-address list with fresh generated on monitor site: <a href="https://monitor.testnet.protonchain.com/#p2p" rel="nofollow">https://monitor.testnet.protonchain.com/#p2p</a></blockquote><blockquote>Check chain-state-db-size-mb value in config, it should be not bigger than you have RAM:<br>chain-state-db-size-mb = 16384</blockquote><blockquote>your producer name: producer-name = YOUR_BP_NAME<br>signature-provider = YOUR_PUB_KEY_HERE=KEY:YOUR_PRIV_KEY_HERE</blockquote><blockquote>comment out eos-vm-oc-enable and eos-vm-oc-compile-threads (EOSVM OC is not to be used on a block signing node)</blockquote><p>After you have your config.ini setup correctly, you need to create a wallet. The process is pretty straightforward:</p><pre><code>cd /opt/ProtonTestnet/protonNode
./cleos.sh wallet create --file pass.txt
./cleos.sh wallet import
./cleos.sh wallet unlock
cd ../Wallet
./start_wallet.sh</code></pre><p></p><p>Now you can register as a producer ... but actually, before you can do that, you need to request approval. Please see the Medium article on how to do this <a href="https://medium.com/@eosusa.michael/creating-multisig-for-proton-regprod-permissions-3cb46b0ea235">here</a>. Keep in mind that this process does not actually seem to broadcast the transaction required to do this &#x2013; rather, it generates the command that you can run with cleos to do so. </p><p>When you get the command, you broadcast it like this:</p><pre><code class="language-json">./cleos.sh -u https://protontestnet.greymass.com push transaction &apos;{
  &quot;delay_sec&quot;: 0,
  &quot;max_cpu_usage_ms&quot;: 0,
  &quot;actions&quot;: [
    {
      &quot;account&quot;: &quot;eosio.msig&quot;,
      &quot;name&quot;: &quot;propose&quot;,
      &quot;data&quot;: {
        &quot;proposer&quot;: &quot;luminaryvisn&quot;,
        &quot;proposal_name&quot;: &quot;thanks&quot;,
        &quot;requested&quot;: [
          {
            &quot;actor&quot;: &quot;luminaryvisn&quot;,
            &quot;permission&quot;: &quot;active&quot;
          }
        ],
        &quot;trx&quot;: {
          &quot;max_net_usage_words&quot;: 0,
          &quot;max_cpu_usage_ms&quot;: 0,
          &quot;delay_sec&quot;: 0,
          &quot;context_free_actions&quot;: [],
          &quot;actions&quot;: [
            {
              &quot;account&quot;: &quot;eosio.proton&quot;,
              &quot;name&quot;: &quot;reqperm&quot;,
              &quot;authorization&quot;: [
                {
                  &quot;actor&quot;: &quot;luminaryvisn&quot;,
                  &quot;permission&quot;: &quot;active&quot;
                }
              ],
              &quot;data&quot;: &quot;30B1DBFE9AE9A48E0772656770726F64&quot;
            }
          ],
          &quot;transaction_extensions&quot;: [],
          &quot;expiration&quot;: &quot;2022-03-15T17:57:09.000&quot;,
          &quot;ref_block_num&quot;: 64677,
          &quot;ref_block_prefix&quot;: 1145945814
        }
      },
      &quot;authorization&quot;: [
        {
          &quot;actor&quot;: &quot;luminaryvisn&quot;,
          &quot;permission&quot;: &quot;owner&quot;
        }
      ]
    }
  ]
}&apos;
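# Aside (my reading, based on standard Antelope/EOSIO serialization -- not from
# the original article): the hex &quot;data&quot; field above packs the actor name plus
# the string &quot;regprod&quot;; the trailing bytes are a length prefix (07) plus ASCII,
# which you can decode yourself:
echo 72656770726f64 | xxd -r -p   # prints: regprod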
</code></pre><p>Then you need to wait for your request to be approved. Once it is, you can actually register:</p><pre><code>./cleos.sh system regproducer YOUR_ACCOUNT PUBKEY &quot;URL&quot; 0</code></pre><p>Finally, your node needs to be synchronized. Just restore from a snapshot and save yourself a ton of time. You should do this in a <code>tmux</code> session:</p><pre><code>sudo apt -y install zstd
cd /opt/ProtonTestnet/protonNode/snapshots/
wget https://backup.cryptolions.io/ProtonTestNet/snapshots/latest-snapshot.bin.zst
unzstd latest-snapshot.bin.zst
cd /opt/ProtonTestnet/protonNode
./start.sh --snapshot /opt/ProtonTestnet/protonNode/snapshots/latest-snapshot.bin</code></pre><p>Now you are in business. You need to demonstrate that you can run successfully for at least two weeks &#x2013; <strong>UPDATE: You are also required to </strong><em><strong>continuously </strong></em><strong>run your testnet node even after you are approved for the main network!</strong><br><br>Make sure you are active in the Telegram channel, and do not be afraid to ask for votes &#x2013; at some point you will want to request that your node be placed in the &quot;top 21&quot; so that you can verify it is working correctly.</p><p>You can check your execution time and other stats on various testnet monitors such as <a href="https://api.monitor.testnet.protonchain.com">https://api.monitor.testnet.protonchain.com</a>. Keep in mind that your execution time needs to be below 35 milliseconds.</p><p>Make sure that you set up your <code>bp.json</code> file so that your avatar and other information are shown on the monitors. There is a <a href="https://proton.eosio.online/bpjson">handy generator available</a> that you can use. Place the file at the root of your entity&apos;s domain. For example, ours is here:</p><p><a href="https://luminaryvisn.com/bp.json">https://luminaryvisn.com/bp.json</a></p><p>That&apos;s pretty much it! With a little luck, you will eventually be approved to produce blocks on the main network, as we just were!</p><h2 id="final-considerations">Final Considerations</h2><p>The first thing that you should do is <em>back up your keys!</em> You will need them in the event that something goes wrong with your node and you need to unregister as a producer. 
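</p><p>One way to keep a copy safe off-site (a sketch &#x2013; the filename <code>producer-key.txt</code> is hypothetical, and any strong passphrase-based tool will do) is to symmetrically encrypt the key file with gpg before uploading it anywhere:</p><pre><code>gpg --symmetric --cipher-algo AES256 producer-key.txt
# produces producer-key.txt.gpg, which is safe to store in the cloud;
# recover it later with: gpg --decrypt producer-key.txt.gpg</code></pre><p>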
I would advise backing them up in multiple locations, both locally on a USB drive and remotely, encrypted in the cloud.</p><p>Stay tuned for my next article in this series, which details the process of configuring a node for the main network.</p>]]></content:encoded></item></channel></rss>