1. fellow

    fellow Member

    BURST, which is based on NXT (loosely forked from it), has a limit of 255 transactions per block.
    I monitor the BURST chain with a MySQL backend: about 0.5% of all blocks reached that limit, while the average is only 0.069 transactions per block at a 4-minute target block time.

    As soon as information about MicroCash transactions is available, I plan to develop a similar framework.
    Here is some MySQL sample output for the numbers above:

    mysql> select count(txids) from blocks where txids='255';
    +--------------+
    | count(txids) |
    +--------------+
    |         1045 |
    +--------------+
    1 row in set (0.01 sec)

    mysql> select count(txids)/(select sum(txids)from blocks) from blocks;
    +---------------------------------------------+
    | count(txids)/(select sum(txids)from blocks) |
    +---------------------------------------------+
    |                                      0.0690 |
    +---------------------------------------------+
    1 row in set (0.07 sec)



    mysql> select * from blocks order by height desc limit 1;
    +----------+--------+-------+---------+--------+----------------------------+----------+-------+
    | idblocks | height | scoop | target  | reward | miner                      | time     | txids |
    +----------+--------+-------+---------+--------+----------------------------+----------+-------+
    |   201432 | 201432 |  2320 | 4636412 |   3972 | BURST-AVY4-MLAT-MQNZ-FS3BS | 48714607 |     4 |
    +----------+--------+-------+---------+--------+----------------------------+----------+-------+
    1 row in set (0.00 sec)
     
  2. Ryler Sturden

    Ryler Sturden Member Staff Member MC Developer

    That is very interesting. Whilst a block may have a transaction limit, that doesn't necessarily mean it can scale that high. You have to actually run transaction tests with the actual code to get a real number, surely? Otherwise you could have a max block size of 100MB and claim you can do a million transactions per block. If you catch my drift?
     
  3. fellow

    fellow Member

    There are several different types of transactions possible, which may each require different amounts of system resources.
    Since the devs have chosen Java and rely on external dependencies, there are many resource-consuming tasks hidden in reformatting strings, casting to new object types, and querying the blockchain database to fetch information where it is needed (e.g.: https://github.com/BurstProject/burstcoin/blob/master/src/java/nxt/TransactionImpl.java#L623).

    When a new block arrives, first the mined block itself gets verified and then the included transactions get processed.
    The transaction verification itself relies on the crypto library used (https://github.com/BurstProject/bur...56652a1e5/src/java/nxt/crypto/Crypto.java#L97).
    A benchmark of this library should give a theoretical limit for the possible transaction throughput. In terms of runtime I assume the surrounding overhead cuts the theoretical limit down to 5-10% of that.
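    To illustrate the benchmarking approach (not the real numbers), here is a minimal Python sketch that times a cryptographic primitive. It uses SHA-256 from the standard library as a stand-in for the actual Curve25519 signature verification, and the 176-byte payload size is an assumption, not a BURST constant:

```python
import hashlib
import os
import time

def benchmark(op, payload, iterations=100_000):
    """Return operations per second for repeatedly calling op(payload)."""
    start = time.perf_counter()
    for _ in range(iterations):
        op(payload)
    return iterations / (time.perf_counter() - start)

# 176 bytes is an assumed size for a simple transaction
msg = os.urandom(176)
ops_per_sec = benchmark(lambda m: hashlib.sha256(m).digest(), msg)
print(f"~{ops_per_sec:,.0f} primitive ops/s")
print(f"at 5-10% effective throughput: "
      f"{0.05 * ops_per_sec:,.0f}-{0.10 * ops_per_sec:,.0f} tx/s")
```

    The same loop pointed at the Java library's verify call would give the actual ceiling.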

    Besides the regular coin-related transactions, the CIYAM AT technology is integrated (http://www.ciyam.org/at/), which runs a "virtual hardware instruction set" inside BURST to execute assembler-like code autonomously on the blockchain, as a base for smart contracts.
    Currently the total count of such ATs is quite low, and execution is paid for in BURST per executed line of code each time.
    I do not think this will have a high performance impact if you look at it as a whole, since you have to feed the AT with coins, which are currently given back to the network as rewards to the miners. So every network member does some sort of additional work, which may only become a bottleneck at large-scale usage.
     
  4. Ryler Sturden

    Ryler Sturden Member Staff Member MC Developer

    It is such a shame that we have all these projects that are barely having their features used. It seems like there are nearly 10 developers for every user. :) Maybe it is just a timing thing.
     
  5. fellow

    fellow Member

    Not sure if anyone is interested in the details of the proof-of-capacity mining system BURST uses, but lately I tried to find ways capacity can be "cheated" and figured out a pseudo-capacity improvement method which can give you a mining advantage for certain blocks.

    First of all I want to mention that this does not break the idea behind PoC. What it can do is give you an advantage for certain blocks compared to regular mining, in terms of finding the lowest deadline with your storage pool.

    PoC writes so-called nonces into plotfiles on your storage and reads back a small fraction of them, selected by the last block. There is no merkle root nor any transactions a miner could influence. The miner can only decide whether or not to submit a block-solving nonce for his account to the network.

    The creation process of a plotfile is quite resource-consuming compared to mining with it. This is because the last block, combined with your account, determines which part of a nonce is valid for solving the block. These parts are called scoops, and each consists of 64 8-bit values. To calculate one of these 64-byte scoops you have to calculate all 4096 scoops contained in a nonce; by design there is no way to calculate only scoop X of a nonce. For mining you precalculate many nonces for your account, and the miner reads only one scoop from each. Since most storage hardware is optimized for larger continuous reads, the plotfiles are organized in a way that lets you read more than 64 bytes without seeking to the next position in the file.
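    To make the sizes concrete, a short Python sketch of the layout arithmetic described above (the constants are from the text; the offset formula assumes a plotfile whose stagger equals its nonce count, i.e. one contiguous block per scoop):

```python
SCOOP_SIZE = 64           # bytes: 64 8-bit values per scoop
SCOOPS_PER_NONCE = 4096   # every nonce contains 4096 scoops
NONCE_SIZE = SCOOP_SIZE * SCOOPS_PER_NONCE  # 262144 bytes (256 KiB) per nonce

def scoop_offset(scoop, nonce_count):
    """Byte offset of a scoop's data block in a plotfile whose stagger
    equals its nonce count: scoop 0 of all nonces is stored first,
    then scoop 1 of all nonces, and so on."""
    return scoop * nonce_count * SCOOP_SIZE

print(NONCE_SIZE)             # 262144
print(scoop_offset(1, 4096))  # 262144: scoop 1 starts right after all scoop-0 data
```

    This per-scoop grouping is what allows the miner to read one large continuous chunk per block instead of thousands of scattered 64-byte reads.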

    Today's out-of-the-box publicly available plotting and mining software requires you to keep all 4096 scoops, because it tests the deadlines for your account against every block.
    Statistically there is no difference whether you test a certain amount of random data against this or that other random data: with enough tries you will always find a block.
    The design of how a block is solved differs slightly from bitcoin and the other coins I know. Since you can neither dynamically increase the number of nonces your plotfiles store nor influence in any way what is looked for (no merkle root), a deadline based on your nonce-scoop combination is calculated; when it reaches 0 you can submit your block to the network and get the mining reward. This design enables you to create plots which can only mine certain scoops but hold more nonces in the same storage size, resulting in a higher chance to find low deadlines.
    Finally, your miner idles if you have not stored the scoop for the current block; the total tested nonces over a period of time may be similar, but due to the deadline-based block winning this increases your chances to win the blocks you can mine.
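    To illustrate the trade-off, a small Python simulation using uniform random numbers as a stand-in for the real deadline calculation: a plot that keeps only half its scoops fits twice the nonces in the same space, so on the blocks it can mine it finds the lower deadline roughly two thirds of the time against an equally sized full plot, while on the other blocks it idles:

```python
import random

rng = random.Random(1)
N = 200        # nonces a full plot fits; a half-scoop plot fits 2*N in the same space
TRIALS = 5000

wins = 0
for _ in range(TRIALS):
    # compare best deadlines on a block whose scoop the pruned plot kept
    d_full = min(rng.random() for _ in range(N))
    d_pruned = min(rng.random() for _ in range(2 * N))
    if d_pruned < d_full:
        wins += 1

print(f"pruned plot wins {wins / TRIALS:.0%} of the blocks it can mine")  # ~2/3
```

    The expected number of wins over many blocks stays about the same, which is why this shifts chances between blocks rather than breaking PoC.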

    I am currently working on a Linux-based miner and plotting mod which stores one file per scoop, so you can decide which scoops to keep.
    If anyone is interested, I can post it here when it is ready.
     
  6. Ryler Sturden

    Ryler Sturden Member Staff Member MC Developer

    So you found an exploit basically? :)
     
  7. fellow

    fellow Member

    I would not call this an exploit, since you still have to precalculate and store the nonces you mine with and cannot generate them on the fly.
    The only difference is that it enables you to throw scoops away and deal with smaller file sizes. What you can achieve with this is to remove your chances for certain blocks and increase your chances for other blocks in a linear fashion, since the scoops are generally distributed quite evenly.

    I had no time to integrate my file layout into a miner yet, but I can already plot with it.
    What it does is quite simple. It works with any regular plotting tool and requires you to initialize it by having a folder with one manually plotted file starting at nonce 0.
    This file has to contain the same number of nonces as your stagger is set to.
    Then you run the scoopsplitter script; it creates a subfolder for the found account id if that folder does not already exist, plus subfolders for each scoop with an empty scoops_0_0 file inside.
    After this is done, the nonces get extracted from your plotfile into the scoop files, and each scoop file gets renamed according to the number of nonces added.
    Currently there are no checks whether everything worked, which means that if you interrupt the script it trashes your unfinished plots.

    For some reason the forum does not allow uploading the files, since it detects they are scripts (changing the extension did not help), so I pasted the scoopsplitter below.
    There are two more scripts for automated plotting (a plotter script which plots the next required nonces and a loop script), but these have no special content, and I plan to bundle everything in an archive as soon as I have integrated this nonce layout into a miner.

    The scoopsplitter script uses dd to extract the binary content and should work without special dependencies (awk, dd, bc and grep, which are available on most systems). Currently it only works if exactly one plotfile with the correct nonce content exists in the folder it is started from. If you copy-paste the script, some special characters may get converted by the forum and break it; look for "`" and replace it after pasting:

    #!/bin/bash
    # scoopsplitter: split one optimized plotfile (name: id_start_nonces_stagger)
    # into per-scoop files under ./<id>/<scoop>/scoops_0_<noncecount>

    files=`ls *_*_*_*|grep _`
    echo $files
    burstid=`echo $files| awk -F'_' '{print $1}'`
    startnonce=`echo $files| awk -F'_' '{print $2}'`
    nonce=`echo $files| awk -F'_' '{print $3}'`
    stagger=`echo $files| awk -F'_' '{print $4}'`
    echo "id: $burstid start: $startnonce nonce: $nonce stagger: $stagger"

    if [ ! -d "$burstid" ]; then
        mkdir "$burstid"
    fi

    for i in {0..4095}; do
        # create the scoop folder with an empty seed file on first run
        if [ ! -d "$burstid/$i" ]; then
            mkdir "$burstid/$i"
            touch "$burstid/$i/scoops_0_0"
        fi
        echo "extracting scoop: $i....${burstid}_${startnonce}_${nonce}_${stagger}"
        bs=`echo "$nonce*64"|bc`    # one scoop block: 64 bytes per nonce
        oldfile=`find ./$burstid/$i/|grep scoops`
        lastnonce=`echo $oldfile| awk -F'_' '{print $3}'`
        newnonce=`echo "$lastnonce+$nonce"| bc`
        # only append when the plotfile continues exactly where the scoop file ends
        if [ $(echo "$lastnonce==$startnonce"|bc) -gt 0 ]; then
            dd if=$files bs=$bs skip=$i count=1 status=none >> ./$burstid/$i/scoops_0_$lastnonce
            mv ./$burstid/$i/scoops_0_$lastnonce ./$burstid/$i/scoops_0_$newnonce
        fi
    done

    rm -f $files
     
    Last edited: Apr 23, 2016
  8. fellow

    fellow Member

    I just modded the mdcct (https://github.com/Mirkic7/mdcct) miner tool to work with the scoop-arranged plotfiles, and it finds deadlines:

    scoop: 3106 31 MB read/125 GB total/deadline 926954s (926879s left)
    New block 222193, basetarget 3182355
    scoop: 3534 31 MB read/125 GB total/deadline 17543395s (17543375s left)

    Only a few lines of code had to be adjusted, but I still have to tokenize the account number from the plotfile string; in my test version it is hard-coded at the moment.
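    Tokenizing the account id from the conventional plotfile name (accountid_startnonce_nonces_stagger) is simple; here is a Python sketch of what the C code would need to do (the sample name is made up):

```python
def parse_plotfile_name(name):
    """Split the conventional accountid_startnonce_nonces_stagger plot name."""
    account_id, start_nonce, nonces, stagger = name.split("_")
    return int(account_id), int(start_nonce), int(nonces), int(stagger)

print(parse_plotfile_name("12345678901234567890_0_4096_4096"))
# (12345678901234567890, 0, 4096, 4096)
```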
     
  9. fellow

    fellow Member

    BURST launched a new website with a roadmap and some announcements.

    The topics read exciting in terms of collaboration with other coin development teams like KORE, XQN, QORA and "Federated".
    About the last one I had never heard anything, but it may be the most interesting, since it is set up as a group of developers who work on "same goals", "share services" and things like that.

    Check it out here: http://www.burst.press/

    Two videos have also been released, each with currently about 100k views:

     
    Last edited: Jun 11, 2016
