England to Issue National Stablecoin

This March, the Bank of England published a discussion paper titled “Central Bank Digital Currency”.

The paper considers: as the issuer of the safest and most trusted form of money in the economy, should we innovate to provide the public with electronic money — or Central Bank Digital Currency (CBDC) — as a complement to physical banknotes?

A CBDC could provide households and businesses with a new form of central bank money and a new way to make payments.

CBDC could also be designed in a way that contributes to a more resilient, innovative and competitive payment system for UK households and businesses.

A Central Bank Digital Currency would be an innovation in both the form of money provided to the public and the payment infrastructure on which payments can be made.

CBDC could present a number of opportunities for the way that the Bank of England achieves its objectives of maintaining monetary and financial stability.

It could support a more resilient payments landscape.

CBDC may also provide safer payment services than new forms of privately issued money‑like instruments, such as stablecoins.

Common reproaches against banksters are primitive and incorrect

Crypto-Reddit is full of accusations against banks. They say banksters issue currencies backed by nothing, with zero scarcity. True. Yet this is not the core problem they create (and it is not one that Bitcoin solves).

It is easier to produce than to sell. Ask an entrepreneur who has managed to create something.

It wasn’t always so. At some point, about 40 years ago, consumers in the Rich West had tasted everything [reasonable] they ever wanted. The global financial oligopoly developed a well-educated intention:

“We should influence both the production side and the consumer side.”

The former was already a traditional tool. Bankers and the connected elite have always managed things by choosing which businesses to support. The latter was an innovation back in the early eighties, but today most of the things people buy, they buy on credit. Thus, it is not the goods that compete but rather the credit terms. The free market is gone, as Marx warned. Capitalism gave way to imperialism.

If you removed ubiquitous credit from our lives, people would buy other products and other brands. Entire industries that sell hollow status items, as well as entertainment, would die out.

Both production and the “compulsion to consume” got concentrated in the same hands. The vicious circle of manipulation spins faster and faster. Money goes to those businesses that provide more opportunities for manipulating consumption. To the detriment of everything else, half of the world is now designing, photographing, marketing, and blogging 24/7. The other half spends hours a day consuming pointless content. Millions of scientists, engineers, and entrepreneurs ended up serving this fuss.

On paper, people consume more than they produce, and the debt is growing. It is artificial. There are no human beings on this planet who can take hundreds of trillions of real valuables out of their pockets to “lend” them to us. It was us and only us who produced what we have already eaten and otherwise consumed.

The artificial debt was once profitable to its designers, but not any longer. The skewed demand has lost momentum. The mismatch of consumer “needs” with common sense has led to a common loss of adequacy. Our civilization remains very fragile. Chaos and death will engulf any city in the territory of the Golden Billion if electricity is off for only a couple of days — this is what we must not forget.

It’s high time to write off the pseudo-debt. Everybody needs it, including the pseudo-creditors. No one ever bent their back for that money; it has always been an accounting stunt. But accounting is a strict discipline. Manageability in writing off the debt is still possible. But risk-free manageability is not. Not any longer. The problem is over-ripe.

They will take risks. They will try to deploy MMT. One indicator: Stephanie Kelton, an ardent advocate of MMT, is an adviser to Bernie Sanders.

There is an international division of labor. Some countries perform the function of the assembly shop. Others handle the processing of raw materials, and so on. The US does accounting and management. This division works without clear international agreements, outside public control. It’s some sort of private party. As the essence of the current monetary trap is clear to most players, this division of labor is no longer sustainable. Global financiers are being slowly (but surely) removed from real power. The most important thing in business is personnel, not money.

The US is no longer the trusted currency issuer that MMT must presuppose. The possible outcomes are scary.

Via Ad-hoc Economy

You don’t care about “The Price of the Internet”. So ignore the price of Bitcoin.

By Beautyon

… the price of buying Bitcoins at the exchanges doesn’t matter for the consumer. 

If the cost of buying a Bitcoin goes to 1¢, this doesn’t change the amount of money that comes out at the other end of a transfer. As long as you redeem your Bitcoin immediately after the transfer into either goods or currency, the same value comes out at the other end no matter what you paid for the Bitcoin when you started the process.

Think about it this way. Let us suppose that you want to send a long text file to another person. You can either send it as it is, or you can compress it with zip. The size of a document file when it is zipped can be up to 87% smaller than the original. When we transpose this idea to Bitcoin, the compression ratio is the price of Bitcoin at an exchange. If a Bitcoin is $100, and you want to buy something from someone in India for $100, you need to buy 1 Bitcoin to get that $100 to India. If the price of Bitcoin is 1¢, then you need 10,000 Bitcoin to send $100 to India. These would be expressed as compression ratios of 1:1 and 10,000:1 respectively.
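The arithmetic of the compression-ratio analogy can be sketched in a few lines of Python (the numbers are the article's; the function names are mine):

```python
# The article's "compression ratio" analogy: the dollar value transferred
# is independent of the Bitcoin price, as long as you redeem immediately.

def btc_needed(transfer_usd, btc_price_usd):
    """How many BTC must be bought to move a given dollar amount."""
    return transfer_usd / btc_price_usd

def value_received(transfer_usd, btc_price_usd):
    """Value at the other end, assuming immediate redemption at the same price."""
    return btc_needed(transfer_usd, btc_price_usd) * btc_price_usd

print(btc_needed(100, 100.0))  # 1.0 BTC      (compression ratio 1:1)
print(btc_needed(100, 0.01))   # 10000.0 BTC  (compression ratio 10,000:1)

# Either way, $100 of value arrives:
print(value_received(100, 100.0), value_received(100, 0.01))  # 100.0 100.0
```

The price only sets how many coins carry the value, not how much value arrives.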

The same $100 value is sent to India, whether you use 10,000 or 1 Bitcoin. The price of Bitcoins is irrelevant to the value that is being transmitted, in the same way that zip files do not ‘care’ what is inside them; Bitcoin and zip are dumb protocols that do a job. As long as the value of Bitcoins does not go to zero, it will have the same utility as if the value were very ‘high’.

Bearing all of this in mind, it’s clear that new services to facilitate the rapid, frictionless conversion into and out of Bitcoin are needed to allow it to function in a manner that is true to its nature.

The current business models of exchanges are not addressing Bitcoin’s nature correctly. They are using the Twentieth Century model of stock, commodity and currency exchanges and superimposing this onto Bitcoin. Interfacing with these exchanges is non-trivial, and for the ordinary user a daunting prospect. In some cases, you have to wait up to seven days to receive a transfer of your fiat currency after it has been cashed out of your account from Bitcoins. Whilst this is not a fault of the exchanges, it represents a very real impediment to Bitcoin acting in its nature and providing its complete value.

Imagine this: you receive an email from across the world, and are notified of the fact by being shown the subject line in your browser. You then apply to your ISP to have this email delivered to you, and you have to wait seven days for it to arrive in your physical mailbox.

The very idea is completely absurd, and yet, this is exactly what is happening with Bitcoin, for no technical reason whatsoever.

It is clear that there needs to be a re-think of the services that are growing around Bitcoin, along with a re-think of what the true nature of Bitcoin is. Rethinking services is a normal part of entrepreneurialism and we should expect business models to fail and early entrants to fall by the wayside as the ceaseless iterations and pivoting progress.

Bearing all of this in mind, focusing on the price of Bitcoin at exchanges using a business model that is inappropriate for this new software simply is not rational; it’s like putting a methane-breathing canary in a mine full of oxygen-breathing humans as a detector. The bird dies even though nothing is wrong with the air; the miners rush to evacuate, leaving the exposed gold seams behind, thinking that they are all about to be wiped out, when all is actually fine.

Bitcoin, and the ideas behind it are here to stay. As the number of people downloading the client and using it increases, like Hotmail, it will eventually reach critical mass and then spread exponentially through the internet. When that happens, the correct business models will spontaneously emerge, as they will become obvious, in the same way that Hotmail, Gmail, Facebook, cellular phones and instant messaging seem like second nature.

In the future, I imagine that very few people will speculate on the value of Bitcoin, because even though that might be possible, and even profitable, there will be more money to be made in providing easy-to-use Bitcoin services that take full advantage of what Bitcoin is.

One thing is for sure: speed will be of the essence in any future Bitcoin business model. The startups that provide instant satisfaction on both ends of the transaction are the ones that are going to succeed. Even though the volatility of the price of Bitcoin is bound to subside, since Bitcoin has no use in and of itself, getting back to money or goods instantly will be a sought-after characteristic of any business built on Bitcoin.

The needs of Bitcoin businesses provide many challenges in terms of performance, security and new thinking. Out of these challenges will come new practices and software that we can only just imagine as they come over the horizon.

Finally, when there is no more fiat, and the chaotic transition zone between fiat and Bitcoin has been abolished, then everything will be priced in Bitcoin, and there will be no volatility, because no one uses anything other than Bitcoin to buy or sell. If you know any chemistry, this will be like a reaction’s reagents reaching equilibrium; you can shake it and stir it all you like; the reaction is over, and you’re left with the inert product. Right now, compared to the amount of fiat in the world, Bitcoin can expand and contract very rapidly over a large range, because it is small in volume. It can expand to what for many is an unimaginably high price, and then shrink down again. As it gets bigger and accumulates more mass (its price expressed in fiat), these fluctuations will become smaller and smaller. Through all of this, Bitcoin remains exactly the same; it is its users that are publishing numbers as a signal to react upon.

Read full article >>

Bloomberg: “end to the era of trust in central banks”

By John Authers

Full article >>

After the war, the developed world was governed by the Bretton Woods accords, which tied all currencies to the dollar, which was in turn pegged to gold. It was a looser form of a gold standard, and survived until 1971. That was when Richard Nixon ended the gold peg, realizing that it had become too great a burden for the U.S., and stood in the way of the expansionary fiscal policy he was hoping to adopt ahead of his re-election campaign. 

The result was a huge shock to the world order. With the gold peg gone, the financial system adopted a new anchor, which was oil. In a book published 10 years ago, I tried describing the system that replaced Bretton Woods as an Oil Standard. Effectively, producers tried to defend themselves against the declining buying power of the dollar by hiking prices, so as to keep the price of oil in gold terms effectively constant. The oil/gold ratio measures how much gold you would need to pay to buy a certain amount of oil. As the chart shows, it ended the 1970s almost exactly where it had started, despite the massive increase in dollar terms.

Compared to gold, oil is already close to a 50-year low

The chart uses Bloomberg’s historic oil prices, which appear monthly, and pre-dates the latest market drama. Once updated, it will show the oil/gold ratio reached an all-time low, having already halved this year:

The economics of oil producers have been transformed in two months

The Oil Standard era ended in the early 1980s. Markets — and everyone else — had lost faith in the ability of central banks to control inflation. Paul Volcker arrived at the Fed, raised rates more than anyone thought he would dare, provoked a recession, and convinced everyone that central banks could control inflation after all. In conjunction with the Reagan/Thatcher approach to economic management, and then the collapse of the Soviet Union and the resurgence of China, that ushered in a quarter-century of triumphalism for a new model anchored by broadly trusted central banks.

That foundered in the financial crisis of 2007-09.  Now we have reached a new juncture, where the fear is that central banks cannot control deflation. For the post-crisis decade, the U.S. has managed to stay distinct, thanks in part to the privilege of the world’s reserve currency, and in part to the superior success of its corporate sector. It has done this even as Japan and Western Europe have sunk into negative interest rates, while the emerging markets have stagnated. The twin shocks of the epidemic and the oil price now appear to have wounded confidence that the U.S. can stand alone.

At first, central banks struggled against inflation; now deflation is the enemy

It certainly looks as though the world has at last arrived at a point that it appeared to have reached a decade ago. Some new financial order, to replace Bretton Woods and the system that Volcker built to replace it, is now needed. A decade of monetary expansion has delayed the issue. It is hard to see how it can be delayed much further. It would be wise to brace for disruption to match what was experienced at the end of the 1970s and the beginning of the 1980s. 

… Read full article >>

On Negative Probability (for Mechanism Design)

By Alexander Kuzmin, Mycelium CEO

Those who work in the development of token projects need to model the behavior of people. Primarily, they study the potential reaction to incentives. Developers try to answer the question: what will a person do if he or she expects such and such a reward?

Our own experience and discussions with colleagues across the industry suggest that most inventors forget or ignore that probabilities can be negative. What is a negative probability and why is it important to use it?

At first glance, a probability of less than zero is nonsense. What could that mean if any person can either do something or not? Is the minimum probability not zero? No, if we consider non-observable and conditional events.

A negative amount of money does not seem strange to us. It can be interpreted simply as a debt. But a negative number of, say, apples is a less clear concept. We have an even less intuitive sense of what a negative number of events is (for calculating probabilities). But there is no fundamental difference with money.

Let’s assume you start your day with five apples. You are expected to receive eight more apples during the day, and you are going to give away ten apples. As a result, you will end the day with three apples. Since the final result is quite real, no questions arise.

The problem, however, is that you consider only a fraction of all possible scenarios. You need to constrain your behavior so that, at every moment when you must give someone an apple, you actually have one to give. That means (at least) that “apple-nominated debt” is prohibited.

But if you are allowed to have such debt, or—speaking more generally—you can include in your scenarios negative event probabilities, then the system’s flexibility increases.

Few people can replace apples with events in their thought experiments. Can you imagine a negative probability of an “event of the execution of a specific contract”? Difficult, right? But if you simply allow negative probabilities in your calculations, then your model does the work for you without your imagination having to comprehend this speculative phenomenon.

Why is it important to use negative probabilities? Convenience. Research on motivations without using negative probabilities is the same as arithmetic without numbers less than zero. Possible but extremely inconvenient.
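A minimal sketch of the idea, with made-up numbers: signed “quasiprobability” weights over scenarios are allowed as long as they sum to 1 and every observable (coarse-grained) probability remains non-negative, in the spirit of the Dirac and Khrennikov works cited below.

```python
from fractions import Fraction

# Hypothetical scenario weights: signed, but summing to 1.
weights = {"pay_early": Fraction(3, 2),
           "pay_late": Fraction(-3, 4),  # a negative "probability"
           "default": Fraction(1, 4)}

assert sum(weights.values()) == 1

# Observable event: "the contract is executed at all" = early or late.
# Only this coarse-grained probability needs to be non-negative.
p_executed = weights["pay_early"] + weights["pay_late"]
print(p_executed)  # 3/4

# Expected payoff still computes normally with signed weights
# (the payoffs here are made up for illustration):
payoff = {"pay_early": 100, "pay_late": 80, "default": 0}
expected = sum(weights[s] * payoff[s] for s in weights)
print(expected)  # 90
```

The intermediate negative weight never has to be “imagined”; the model simply carries it through, just as bookkeeping carries a negative balance.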

Below is a list of classic studies including those where negative probability is applied to finance.

  • Dirac, P.A.M. The Physical Interpretation of Quantum Mechanics. Proc. Roy. Soc. London (A 180), pp. 1–39, 1942.
  • Khrennikov, A. Equations with infinite-dimensional pseudo-differential operators. Dokl. Academii Nauk USSR, v.267, 6, p.1313–1318, 1982.
  • Khrennikov, A. p-adic probability and statistics. Dokl. Akad. Nauk, v.322, p.1075–1079, 1992.
  • Khrennikov, A. Andrei Khrennikov on Negative Probabilities, in Derivatives, Models on Models, Editor Espen Haug, John Wiley 2007.
  • Khrennikov, A. Interpretations of Probability. Walter de Gruyter, Berlin/New York, 2009.
  • Kolmogorov, A. Grundbegriffe der Wahrscheinlichkeitsrechnung, Ergebnisse der Mathematik (English translation: (1950) Foundations of the Theory of Probability, Chelsea P.C.), 1933.
  • Kolmogorov, A. and Fomin, S. Elements of Function Theory and Functional Analysis, Nauka, Moscow, 1989 (in Russian).
  • Kuratowski, K. and Mostowski, A. Set Theory, North Holland P.C., Amsterdam, 1967.

Via Ad-hoc Economy

Germany recognizes Bitcoin as a legal financial instrument

German financial watchdog provides clarity on the legal status of cryptocurrencies.

By Liam Frost

  • Germany has provided legal clarity on the status of Bitcoin and other cryptocurrencies.
  • It has published five characteristics of cryptocurrencies.
  • Cryptocurrencies are not to be confused with electronic money.

The Federal Financial Supervisory Authority (BaFin) of Germany has officially defined cryptocurrencies as financial instruments, providing further regulatory clarity. This makes it easier for those spending cryptocurrencies and will give some relief to businesses built around them.

According to the translation of BaFin’s press release published on March 2, cryptocurrencies are now classified as “digital representations of value” that have the following characteristics:

  • Not issued or guaranteed by any central bank or public body;
  • Don’t have the legal status of currency or money;
  • Can be used by individuals or legal entities as a means of exchange or payment;
  • Serve investment purposes;
  • Can be transmitted, stored and traded electronically.

The document also notes that cryptocurrencies are not to be confused with various types of “electronic money” which have other sections of the law dedicated to them.

The new classification was based on definitions written by other regulators worldwide, such as the Financial Action Task Force. BaFin also clarified that prior to this, cryptocurrencies did not fall into any of the generally recognized pre-existing categories in Germany.

Math in Solidity

By Mikhail Vladimirov

Ethereum is a programmable blockchain, whose functionality could be extended by publishing pieces of executable code, known as smart contracts, into the blockchain itself. This distinguishes Ethereum from the first generation of blockchains, where new functionality requires client software to be modified, nodes to be upgraded, and the whole blockchain to be forked.

A smart contract is a piece of executable code published on-chain that has a unique blockchain address assigned to it. A smart contract controls all the assets belonging to its address and may act on behalf of this address when interacting with other smart contracts. Each smart contract has persistent storage that is used to preserve its state between invocations.

Solidity is the primary programming language for smart contract development on Ethereum, as well as on several other blockchain platforms that use the Ethereum Virtual Machine (EVM).

Programming was always about math, blockchain was always about finance, and finance was about math since ancient times (or maybe math was about finance). Being the primary programming language for Ethereum blockchain, Solidity has to do math well.

In this series, we discuss various aspects of how Solidity does the math, and how developers do math in Solidity. The first topic to discuss is numbers.

Numeric Types in Solidity

In comparison to mainstream programming languages, Solidity has quite a few numeric types: 5,248 of them, to be precise. Yes, according to the documentation, there are 32 signed integer, 32 unsigned integer, 2,592 signed fixed-point, and 2,592 unsigned fixed-point types. JavaScript has only two numeric types. Python 2 used to have four, but in Python 3 the type “long” was dropped, so now there are only three. Java has seven, and C++ has around fourteen.

With so many numeric types, Solidity should have a proper type for everybody, right? Not so fast. Let’s look at these numeric types a bit closer.

We will start with the following question:

Why Do We Need Multiple Numeric Types?

Spoiler: we don’t.

There are no numeric types in pure math. A number may be integer or non-integer, rational or irrational, positive or negative, real or imaginary, etc., but these are just properties a number may or may not have, and a single number may have several such properties at once.

Many high-level programming languages have a single numeric type. JavaScript had only “number” until “BigInt” was introduced in 2019.

Unless doing hardcore low-level stuff, developers don’t really need multiple numeric types; they just need pure numbers with arbitrary range and precision. However, such numbers are not natively supported by hardware and are somewhat expensive to emulate in software.

That’s why low-level programming languages and languages aimed at high performance usually have multiple numeric types, such as signed/unsigned, 8/16/32/64/128 bits wide, integer/floating-point etc. These types are natively supported by hardware and are widely used in file formats, network protocols etc, thus low-level code benefits from them.

However, for performance reasons, these types usually inherit all the weird semantics of the underlying CPU instructions, such as silent over- and underflow, asymmetric range, binary fractions, byte-ordering issues, etc. This makes them painful to use in high-level business-logic code. Straightforward usage often turns out insecure, and secure usage often becomes cumbersome and unreadable.

So, the next question is:

Why Does Solidity Have So Many Numeric Types?

Spoiler: it doesn’t.

EVM natively supports two data types: the 256-bit word and the 8-bit byte. Stack elements, storage keys and values, instruction and memory pointers, timestamps, balances, transaction and block hashes, addresses, etc. are 256-bit words. Memory, byte code, call data, and return data consist of bytes. Most of the EVM opcodes deal with words, including all math operations. Some of the math operations treat words as signed integers, some as unsigned integers, while other operations work the same way regardless of whether the arguments are signed or unsigned.

So EVM natively supports two numeric types: signed 256-bit integer and unsigned 256-bit integer. These types are known in Solidity as int and uint respectively.

Apart from these two types (and their aliases int256 and uint256), Solidity has 62 integer types int<N> and uint<N>, where <N> can be any multiple of 8 from 8 to 248, i.e. 8, 16, …, 248. At the EVM level, all these types are backed by the same 256-bit words, but the result of every operation is truncated to N bits. They can be useful in specific cases when a particular bit width is needed, but for general calculations these types are just less powerful and less efficient (truncating after every operation is not free) versions of int and uint.
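Truncation to N bits is easy to model with a bit mask. A rough sketch in Python (this models the wrap-on-overflow behavior of Solidity 0.6.x narrower types; it is not the EVM itself):

```python
MASK8 = (1 << 8) - 1  # 0xFF: keeps only the low 8 bits

def uint8_add(a, b):
    """Model of Solidity uint8 addition: compute in a wide word,
    then truncate the result to 8 bits (overflow wraps silently)."""
    return (a + b) & MASK8

print(uint8_add(200, 100))  # 44, because 300 mod 256 == 44
print(uint8_add(255, 1))    # 0, the classic wrap-around
```

The extra masking step after every operation is exactly the cost the article refers to when it calls the narrower types less efficient.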

Finally, Solidity has 5,184 fixed-point types fixedNxM and ufixedNxM, where N is a multiple of 8 from 8 to 256 and M is an integer number from 0 to 80 inclusive. These types are supposed to implement decimal fixed-point arithmetic of various range and precision, but as of now (Solidity 0.6.2) the documentation says that:

Fixed point numbers are not fully supported by Solidity yet. They can be declared, but cannot be assigned to or from.

So fixed-point numbers, and fractional numbers in general, are not currently supported.

Then, the next question is:

What If We Need Fractional Numbers or Integers Wider Than 256 Bits?

Spoiler: you have to emulate them.

One would say that 256 bits ought to be enough for anybody. However, since most of the numbers in Ethereum are 256 bits wide, even a simple sum of two numbers may be as wide as 257 bits, and a product of two numbers may be up to 512 bits wide.
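Python's arbitrary-precision integers make these widths easy to verify directly:

```python
UINT256_MAX = 2**256 - 1

# Take the two largest 256-bit values and look at the true widths
# of their sum and product, which a single 256-bit word cannot hold.
a = UINT256_MAX
b = UINT256_MAX

print((a + b).bit_length())  # 257: a sum of two 256-bit numbers
print((a * b).bit_length())  # 512: a product of two 256-bit numbers

# The same sum truncated to a 256-bit word, as the EVM would return it:
print((a + b) % 2**256)      # UINT256_MAX - 1; the 257th bit is lost
```

This is why overflow-aware code must either check results or carry extra words, as discussed below.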

A common way to emulate fixed- or variable-width integers that are wider than the types natively supported by a programming language is to represent them as fixed- or variable-length sequences of shorter, natively supported integers. The bit image of the wide integer is then the concatenation of the bit images of the shorter integers.

In Solidity, wide integers may be represented as fixed or dynamic arrays whose elements are either bytes or uint values.

For fractions, the situation is a bit more complicated, as there are different flavors of them, each having its own advantages and drawbacks.

The most basic are simple fractions: one integer, called the “numerator”, divided by another integer, called the “denominator”. In Solidity, a simple fraction can be represented as a pair of two integers, or as a single integer whose bit image is the concatenation of the bit images of the numerator and the denominator. In the latter case, the numerator and denominator have to be of the same width.
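The single-integer packing can be sketched like this (a toy layout of my own choosing: numerator in the high 128 bits of a 256-bit word, denominator in the low 128 bits):

```python
HALF = 128                    # width of each component, in bits
HALF_MASK = (1 << HALF) - 1   # mask for the low 128 bits

def pack(num, den):
    """Concatenate the bit images of numerator and denominator
    into one 256-bit word (both must fit in 128 bits)."""
    assert 0 <= num <= HALF_MASK and 0 < den <= HALF_MASK
    return (num << HALF) | den

def unpack(word):
    """Split the 256-bit word back into (numerator, denominator)."""
    return word >> HALF, word & HALF_MASK

w = pack(22, 7)               # the fraction 22/7
print(unpack(w))              # (22, 7)
```

The same shift-and-mask idea works in Solidity with a single uint, which is why both components must share one fixed width.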

Another popular format for fractions is fixed-point numbers. A fixed-point number is basically a simple fraction whose denominator is a predefined constant, usually a power of 2 or 10. The former case is known as “binary” fixed-point, while the latter is known as “decimal” fixed-point. As long as the denominator is predefined, there is no need to specify it explicitly, so only the numerator needs to be specified. In Solidity, fixed-point numbers are usually represented as a single integer numerator, while commonly used denominators are 10¹⁸, 10²⁷, 2⁶⁴, and 2¹²⁸.
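A sketch of decimal fixed-point arithmetic with the common 10¹⁸ denominator (the function names are mine; libraries such as DSMath use similar operations):

```python
WAD = 10**18  # predefined denominator: 18 decimals

def wad_mul(a, b):
    """(a/WAD) * (b/WAD) equals a*b/WAD**2, so rescale once
    after multiplying to stay at 18 decimals."""
    return a * b // WAD

def wad_div(a, b):
    """(a/WAD) / (b/WAD) equals a/b, so pre-scale the numerator
    to keep 18 decimals of precision."""
    return a * WAD // b

one_and_a_half = 15 * 10**17  # represents 1.5
two = 2 * WAD                 # represents 2.0
print(wad_mul(one_and_a_half, two))  # 3000000000000000000, i.e. 3.0
```

Note the intermediate product a * b is up to 512 bits wide, which is exactly where the emulation techniques above become necessary in Solidity.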

Yet another well-known format for fractional numbers is floating-point. Basically, a floating-point number can be described as follows: m×B^e, where m (the mantissa) and e (the exponent) are integers, while B (the base) is a predefined integer constant, usually 2 or 10. The former case is known as “binary” floating-point, and the latter case is known as “decimal” floating-point.

IEEE-754 standardizes several common floating-point formats, including five binary formats known as “half”, “single”, “double”, “quadruple”, and “octuple” precision. Each of these formats packs both mantissa and exponent into a single sequence of 16, 32, 64, 128, or 256 bits respectively. In Solidity, these standard formats can be represented by the binary types bytes2, bytes4, bytes8, bytes16, and bytes32. Alternatively, mantissa and exponent can be represented separately as a pair of integers.
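The m×B^e decomposition with B = 2 can be observed directly in Python, and the integer-pair alternative is simply two numbers kept side by side:

```python
import math

# math.frexp returns (m, e) with x == m * 2**e and 0.5 <= |m| < 1,
# i.e. a normalized binary mantissa and an integer exponent.
m, e = math.frexp(8.0)
print(m, e)  # 0.5 4, since 8.0 == 0.5 * 2**4

# The pair-of-integers representation mentioned above: mantissa and
# exponent stored separately (values here are illustrative).
mantissa, exponent = 3, -1   # represents 3 * 2**-1 == 1.5
print(mantissa * 2.0**exponent)  # 1.5
```

The pair form trades compactness for simplicity: no bit packing is needed, at the cost of storing two words instead of one.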

And the final question for this section:

Do We Have to Implement All This by Ourselves?

Spoiler: not necessarily.

The good news is that there are Solidity libraries for various number formats, such as: fixidity (decimal fixed-point with arbitrary number of decimals), DSMath (decimal fixed-point with 18 or 27 decimals), BANKEX Library (IEEE-754 octuple precision floating-point), ABDK Libraries (binary fixed-point and quadruple precision floating-point) etc.

The bad news is that different libraries use different formats, so it is really hard to combine them. The roots of this problem will be discussed in the next section.

Numeric Literals in Solidity

In the previous section we discussed how numbers are represented at run time. Here we will look at how they are represented at development time, i.e. in the code itself.

Compared to mainstream languages, Solidity has quite a rich syntax for numeric literals. First of all, good old decimal integers are supported, such as 42. As in other C-like languages, there are hexadecimal integer literals, like 0xDeedBeef. So far so good.

In Solidity, literals may have a unit suffix, such as 6 ether or 3 days. A unit is basically a factor the literal is multiplied by. Here ether is 10¹⁸ and days is 86,400 (24 hours × 60 minutes × 60 seconds).
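Modeling the unit suffixes as plain multipliers makes the semantics obvious (a sketch; in Solidity these are built-in keywords, not variables):

```python
# Solidity unit suffixes are just constant factors:
ether = 10**18
days = 24 * 60 * 60  # 86,400 seconds

print(6 * ether)  # 6000000000000000000: what `6 ether` denotes, in wei
print(3 * days)   # 259200: what `3 days` denotes, in seconds
```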

Apart from this, Solidity supports scientific notation for integer literals, such as 2.99792458e8. This is quite unusual, as mainstream languages support scientific notation for fractional literals only.

But probably the most unique feature of the whole Solidity language is its support for rational literal expressions. Virtually every mature compiler is able to evaluate constant expressions at compile time, so x = 2 + 2 does not generate an add opcode, but is rather equivalent to x = 4. Solidity is able to do this as well, but actually, it goes far beyond that.

In mainstream languages, compile-time evaluation of constant expressions is just an optimization: a constant expression is evaluated at compile time exactly the same way it would be evaluated at run time. This makes it possible to replace any part of such an expression with a named constant or variable holding the same value and get exactly the same result. However, for Solidity this is not the case.

At run time, division in Solidity rounds the result toward zero, and the other arithmetic operations wrap on overflow, while at compile time, expressions are evaluated using simple fractions with arbitrarily large numerators and denominators. So, at run time, the expression ((7 / 11 + 3 / 13) * 22 + 1) * 39 would be evaluated to 39, while at compile time the very same expression is evaluated to 783. The difference arises because at run time, 7 / 11 and 3 / 13 are rounded to zero, but at compile time, the whole expression is evaluated in simple fractions without any rounding at all.
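The two evaluation modes can be reproduced side by side in Python: integer division models the run-time semantics, and exact rationals model the compile-time semantics.

```python
from fractions import Fraction

# Run-time semantics: each division truncates toward zero immediately
# (Python's // matches Solidity's behavior here for positive operands).
runtime = ((7 // 11 + 3 // 13) * 22 + 1) * 39
print(runtime)  # 39: both 7//11 and 3//13 truncate to 0

# Compile-time semantics: exact rational arithmetic throughout,
# with no rounding at any intermediate step.
compiletime = ((Fraction(7, 11) + Fraction(3, 13)) * 22 + 1) * 39
print(compiletime)  # 783
```

The gap between 39 and 783 shows why factoring a sub-expression out into a variable can silently change a Solidity program's result.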

Even more interesting, the following expression is valid in Solidity: 7523 / 48124631 * 6397, while this is not valid: 7523 / 48125631 * 6397. The difference is that the former evaluates to an integer number, while the latter evaluates to a non-integer. Remember that Solidity does not support fractions at run time, so all literals have to be integers.

While fractional numbers and big integers may be represented in Solidity at run time, as described in the previous sections, there is no convenient way to represent them in the code. This makes any code that performs operations on such numbers rather cryptic.

As long as Solidity has neither a standard fixed-point nor a standard floating-point format, every library uses its own, which makes libraries incompatible with each other.


Every time I see +, *, or ** while auditing another Solidity smart contract, I start writing the following comment: “overflow is possible here”. I need a few seconds to write these four words, and during these seconds I scan the nearby lines trying to find a reason why overflow is not possible, or why overflow should be allowed in this particular case. If I find a reason, I delete the comment, but most often the comment remains in the final audit report.

Things aren’t meant to be this way. Arithmetic operators are supposed to allow writing compact, easy-to-read formulas such as a**2 + 2*a*b + b**2. However, this expression would almost certainly raise a bunch of security concerns, and the real code is more likely to look like this:

add (add (pow (a, 2), mul (mul (2, a), b)), pow (b, 2))

Here add, mul, and pow are functions implementing “safe” versions of +, *, and ** respectively.

Concise and convenient syntax is discouraged, plain arithmetic operators are used only marginally (and no more than one at a time), and cumbersome, unreadable functional syntax is everywhere. In this article we analyse the problem that made things so weird, whose infamous name is: overflow.

We Took a Wrong Turn Somewhere

One would say that overflow was always there, and that all programming languages suffer from it. But is this really true? Did you ever see something like a SafeMath library implemented for C++, Python, or JavaScript? Do you really think that every + or * is a security breach until the opposite is proven? Most probably, your answer to both questions is “no”. So,

Why Is Overflow So Painful in Solidity?

Spoiler: nowhere to run, nowhere to hide.

Numbers do not overflow in pure math. One may add two arbitrarily large numbers and get a precise result. Numbers do not overflow in high-level programming languages such as JavaScript and Python. In some cases the result could fall into infinity, but at least adding two positive numbers can never produce a negative result. In C++ and Java integer numbers do overflow, but floating-point numbers don’t.

In those languages whose integer types do overflow, plain integers are used primarily for indexes, counters, and buffer sizes, i.e. for values limited by the size of the data being processed. For values that could potentially exceed the range of plain integers, there are floating-point, big integer, and big decimal data types, either built-in or implemented via libraries.

Basically, when the result of an arithmetic operation does not fit into the type of the arguments, there are a few options for what a compiler may do: i) use a wider result type; ii) return a truncated result and use a side channel to notify the program about the overflow; iii) throw an exception; or iv) just silently return the truncated result.

The first option is implemented in Python 2 when handling int type overflows. The second option is what carry/overflow flags in CPUs are for. The third option is implemented for Solidity by SafeMath library. The fourth option is what Solidity implements by itself.

The fourth option is probably the worst one, as it makes arithmetic operations error-prone, and at the same time makes overflow detection quite expensive, especially in the multiplication case. One needs to perform an additional division after every multiplication to be on the safe side.
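That divide-after-multiply check can be sketched like this, simulating the EVM's wrapping MUL in Python (this mirrors what SafeMath-style mul functions do):

```python
WORD = 2**256

def unsafe_mul(a, b):
    # EVM MUL: the product is silently truncated to 256 bits.
    return (a * b) % WORD

def checked_mul(a, b):
    c = unsafe_mul(a, b)
    # Divide back: if c // a != b, the product was truncated.
    if a != 0 and c // a != b:
        raise OverflowError("multiplication overflow")
    return c
```

For example, unsafe_mul(2**255, 2) silently returns 0, while checked_mul(2**255, 2) raises.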

So Solidity has neither safe types one could run to, nor safe operations one could hide behind. Having nowhere to run and nowhere to hide, developers have to meet overflows face to face and fight them all throughout the code.

Then, the next question is:

Why Doesn’t Solidity Have Safe Types or Operations?

Spoiler: because the EVM doesn’t have them.

Smart contracts have to be secure. Bugs and vulnerabilities in them cost millions of dollars, as we’ve already learned the hard way. Being the primary language for smart contract development, Solidity takes security very seriously. It has many features supposed to prevent developers from shooting themselves in the foot. We mean features like the payable keyword, type cast limitations, etc. Such features are added with every major release, often breaking backward compatibility, but the community tolerates this for the sake of better security.

However, basic arithmetic operations are so unsafe that almost nobody uses them directly nowadays, and the situation doesn’t improve. The only operation that became a bit safer is division: division by zero used to return zero, but now it throws an exception. Yet even division didn’t become fully safe, as it still may overflow. Yes, in Solidity int type division overflows when -2²⁵⁵ is divided by -1, as the correct answer (2²⁵⁵) does not fit into int. All other operations, namely +, -, *, and **, are still prone to over- or underflow and thus are intrinsically unsafe.
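That single overflowing division case can be illustrated with a small Python model of the EVM's SDIV semantics (truncation toward zero, two's-complement wrap on overflow):

```python
INT256_MIN, INT256_MAX = -2**255, 2**255 - 1

def sdiv(a, b):
    # Truncate toward zero (Python's // rounds toward -infinity instead).
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):
        q = -q
    # The only overflowing case: INT256_MIN / -1 should be 2**255,
    # which does not fit into int256 and wraps back to INT256_MIN.
    if q > INT256_MAX:
        q -= 2**256
    return q
```

sdiv(INT256_MIN, -1) silently returns INT256_MIN again, which is how the EVM's SDIV opcode behaves.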

Arithmetic operations in Solidity replicate the behavior of the corresponding EVM opcodes, and making these operations safe at the compiler level would increase gas consumption several times. A plain ADD opcode costs 3 gas. The cheapest opcode sequence implementing safe add that the author of this article managed to find is:

DUP2(3) DUP2(3) NOT(3) LT(3) <overflow>(3) JUMPI(10) ADD(3)

Here <overflow> is the address to jump to on overflow. The numbers in brackets are the gas costs of the operations, and they add up to 28 gas in total. Almost 10 times more than a plain ADD. Too much, right? It depends on what you compare it with. Say, calling the add function from the SafeMath library would cost about 88 gas.
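The check performed by that opcode sequence: a + b wraps exactly when b > NOT a, i.e. when b > 2²⁵⁶ − 1 − a. In Python:

```python
MAX_UINT256 = 2**256 - 1

def checked_add(a, b):
    # NOT a equals MAX_UINT256 - a, so b > NOT a means ADD would wrap.
    if b > MAX_UINT256 - a:
        raise OverflowError("addition overflow")  # the JUMPI branch
    return a + b                                  # the plain ADD
```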

So, safe arithmetic at the library or compiler level costs a lot, but

Why Doesn’t EVM Have Safe Arithmetic Opcodes?

Spoiler: for no good reason.

One would say that arithmetic semantics in the EVM replicate those of a CPU for performance reasons. Yes, some modern CPUs have opcodes for 256-bit arithmetic; however, mainstream EVM implementations don’t seem to use them. Geth uses the big.Int type from the standard library of the Go programming language. This type implements arbitrarily wide big integers backed by arrays of native words. Parity uses its own library implementing fixed-width big integers on top of native 64-bit words.

For both implementations, the additional cost of arithmetic overflow detection would be virtually zero. Thus, if the EVM had versions of arithmetic opcodes that revert on overflow, their gas cost could be made the same as for the existing unsafe versions, or just marginally higher.

Even more useful would be opcodes that do not overflow at all, but return the whole result instead. Such opcodes would permit efficient implementation of arbitrary wide big integers at compiler or library level.

We don’t know why EVM doesn’t have the opcodes described above. Maybe just because other mainstream virtual machines don’t have them?

So far we have been talking about real overflow: a situation when the calculation result is too big to fit into the result data type. Now it is time to discover the other side of the problem:

Phantom Overflows

How would one calculate 3% of x in Solidity? In mainstream languages one just writes 0.03*x, but Solidity doesn’t support fractions. What about x*3/100? Well, this will work in most cases, but what if x is so large that x*3 will overflow? From the previous section we know what to do, right? Just use mul from SafeMath and be on the safe side: mul (x, 3) / 100… Not so fast.

The latter version is somewhat more secure, as it reverts where the former returns an incorrect result. This is good, but… why on earth should calculating 3% of something ever overflow? 3% of something is guaranteed to be less than the original value, in both nominal and absolute terms. So, as long as x fits into a 256-bit word, 3% of x should also fit, shouldn’t it?

Well, I call this “phantom overflow”: a situation when final calculation result would fit into the result data type, but some intermediate operation overflows.

Phantom overflows are much harder to detect and address than real overflows. One solution is to use a wider integer type, or even a floating-point type, for intermediate values. Another is to refactor the expression in order to make phantom overflow impossible. Let’s try the latter with our expression.
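A quick demonstration of a phantom overflow, simulating uint256 wrapping in Python: for x = 2²⁵⁵, the true 3% fits comfortably into 256 bits, yet the intermediate x * 3 wraps and the naive formula returns garbage:

```python
WORD = 2**256

def three_percent_naive(x):
    # Solidity-style evaluation: MUL wraps to 256 bits, then divide.
    return ((x * 3) % WORD) // 100

x = 2**255                      # fits into uint256
exact = x * 3 // 100            # mathematically correct answer; also fits
wrong = three_percent_naive(x)  # the intermediate x * 3 wrapped

assert wrong != exact
```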

Arithmetic laws tell us that the following formulas should produce the same result:

(x * 3) / 100
(3 * x) / 100
(x / 100) * 3
(3 / 100) * x

However, integer division in Solidity is not the same as division in pure math, as Solidity rounds the result toward zero. The first two variants are basically equivalent, and both suffer from phantom overflow. The third variant does not have the phantom overflow problem, but is somewhat less precise, especially for small x. The fourth variant is more interesting, as it surprisingly leads to a compilation error:

browser/Junk.sol:5:18: TypeError: Operator * not compatible with types rational_const 3 / 10 and uint256

We already described this behavior in our previous article. To make the fourth expression compile we need to change it like this:

(uint (3) / 100) * x

However, this does not help much, as the result of the corrected expression is always zero, because 3 / 100 rounded toward zero is zero.

Via the third variant we managed to solve the phantom overflow problem at the cost of precision. Actually, the precision loss is significant only for small x, while for large x it is negligible. Remember that for the original expression the phantom overflow problem arises for large x only, so it seems that we may combine both variants like this:

x > SOME_LARGE_NUMBER ? x / 100 * 3 : x * 3 / 100

Here SOME_LARGE_NUMBER could be calculated as (2²⁵⁶-1)/3, rounded down. Now for small x we use the original formula, while for large x we use the modified formula that does not permit phantom overflow. It looks like we have solved the phantom overflow problem without significant loss of precision. Great work, right?
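The combined formula as a Python sketch (SOME_LARGE_NUMBER is chosen so that x * 3 can never exceed 2²⁵⁶ − 1 on the small-x branch):

```python
MAX_UINT256 = 2**256 - 1
SOME_LARGE_NUMBER = MAX_UINT256 // 3  # (2**256 - 1) / 3, rounded down

def three_percent(x):
    if x > SOME_LARGE_NUMBER:
        return x // 100 * 3   # divide first: less precise, never overflows
    return x * 3 // 100       # multiply first: precise, x * 3 fits 256 bits
```

For x ≤ SOME_LARGE_NUMBER, x * 3 ≤ 2²⁵⁶ − 1 by construction, so the multiplication cannot wrap; for larger x the precision loss of dividing first is negligible.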

In this particular case, probably yes. But what if we need to calculate not 3%, but rather 3.1415926535%? The formula would be:

x > SOME_LARGE_NUMBER ?
x / 1000000000000 * 31415926535 :
x * 31415926535 / 1000000000000

Our SOME_LARGE_NUMBER would become (2²⁵⁶-1)/31415926535. Not so large anymore. And what about 3.141592653589793238462643383279%? Good as it is for simple cases, this approach does not seem to scale well.

Via Coinmonks

Bitcoin Casinos — 5 Reasons Why You Should Only Gamble Here

Bitcoin casinos have been around for a few years. Ever since online casinos started letting you gamble with cryptocurrency, players from around the world have flocked to their sites. We live in a society where the government tries to control every aspect of our lives. So, if we keep allowing ourselves to be tied down by rules and regulations, all the fun and joy in our lives will cease to exist.

Many adults around the world enjoy a good gamble once in a while. Tourists from all over flock to Las Vegas all year long for a few days of great entertainment. Fine dining, live shows, shopping and, of course, gambling are all on the menu. Some play and win, some play and lose, but in the end everyone leaves with a smile on their face.

Online gambling is exactly the same, except that you can enjoy yourself from any location around the world, on your computer or mobile device. But when governments start to interfere with your own personal expenses, that’s where we need to draw the line. So, here are the 5 main reasons you should only gamble at Bitcoin casinos:

1. No Invasion of Privacy

One of the biggest issues with regulation in online gambling is privacy. As an online gambler, you need to pass credit checks and supply documents and copies of your personal IDs. Furthermore, you need to send statements of wealth to prove that you can afford to gamble. Gambling can be addictive, and some people can lose more than they can afford to. However, it’s a severe invasion of privacy to ask how much money one has in their bank account. Why would anyone want to tell a casino how much money they have?

Additionally, it’s your money! No one should be allowed to tell you where and how to spend it. It’s your responsibility and your decision only. Moreover, no one should be allowed to tell you how much you can spend, daily, monthly or yearly on gambling. 

That’s why cryptocurrency was invented. People got fed up with banks and governments controlling their money. When you gamble with crypto, you can do so anonymously, and you’ll never have to identify yourself. On top of this, you don’t have to provide any docs or go through a verification process. You can just sign up with an email and password, transfer money from your crypto wallet and start gambling.

2. No Country Restrictions

Cryptocurrency is not like fiat currency. Crypto transactions are borderless, and they do not need to be approved by banks. So, if you want to send money to someone abroad, you can do so instantly, without any hassle. You may need to pay a small fee on each transaction, but it’s very minor compared to the fees banks charge you.

When it comes to crypto gambling, transactions are not restricted by any regulations. Many countries around the world do not allow their citizens to gamble online. This is due to state-owned gambling and lottery companies keeping their monopolies in place. Another reason is that land-based casinos with political ties to the government do not want to lose potential visitors and revenues to off-shore companies.

Since cryptocurrency transactions are fully anonymous, government restrictions and regulations do not apply. For example, if you live in the USA or Australia, where online gambling is prohibited or limited, then BTC casinos that accept cryptocurrency are the place for you. You can gamble freely, without anyone knowing.

3. 100% Guaranteed Transactions 

Every crypto transaction is guaranteed. When you want to send or receive cryptocurrency, all you have to do is enter a valid wallet address and the payment is instantly sent or received. Some platforms charge a small fee per transaction, usually around 2.5%. But it’s nothing compared to credit card and e-wallet transaction fees that can reach up to 20%.

What’s even better is that cryptocurrency casinos offer deposits and withdrawals in many different cryptocurrencies. Although BTC is the most popular currency, crypto gamblers also enjoy playing in XRP, LTC and ETH. So, you can deposit and cash out money in your preferred currency, without exchange rate fees.

4. No Limitations

One thing that gamblers do not like is having a babysitter telling them which, what, when and how:

  • Which limits you’re allowed to bet at.
  • What games you’re allowed to play.
  • When you can play those games.
  • How long you can play those games for.

In regulated casinos, players need to set limitations before they can start gambling. You need to state how many hours you want to be able to gamble per day, as well as set a daily, weekly or monthly deposit limit. Furthermore, you may also be limited to certain betting stakes. So, imagine trying to play blackjack or roulette with a maximum bet of $10. Where’s the fun in that? On top of that, you may not have access to all the different casino games available at non-regulated casinos.

Bitcoin casinos have no limitations when it comes to game selections, deposits, withdrawals and playing time. You are your own boss. You can play what you want, when you want, for as high a limit as you like and for as long as you want. On top of that your winnings are not capped, and you have no withdrawal limits. So, you can play freely, without someone watching over your shoulder all the time to see what you’re doing.

Casino games at BTC casinos also tend to have higher payouts, averaging RTPs of around 96.70%, as well as bigger jackpot prizes. So, you tend to get more return for your money, which allows you to play longer and have more fun.

5. Cryptocurrency Value

Which brings us to the main reason to gamble at Bitcoin casinos. Cryptocurrency value is very speculative and volatile. One day 1 Bitcoin could be worth $10,000 and a few hours later it could be worth $12,000 or $9,000. So, when you gamble with cryptocurrency, you could choose to withdraw your winnings when the coin value goes up. Therefore, you’re not only cashing out casino winnings, but you also have the option of cashing in your cryptocurrency for even more money than you started off with. 

Additionally, you can exchange one cryptocurrency for another that potentially has more value. Since many different currencies are accepted at crypto casinos today, you can cash in on these rises in coin value. All you have to do is add additional cryptocurrencies to your casino wallet and play casino games in each currency.

Big Wall Street reps will continue to bad-mouth it, but the reality is that cryptocurrency is good. Many big fintech companies have already integrated cryptocurrency into their payment systems. For us gamblers, cryptocurrency is a blessing. At KingBit Casino, you can gamble freely, without limits and without worrying about breaking the law. Moreover, players from all around the world, including US gamblers, are welcome, and you can gamble with many different cryptocurrencies. Crypto is the future and is not going anywhere. So, transfer some coins and have a good time gambling online.

Eight centuries of global real interest rates, 1311–2018

BANK OF ENGLAND, Staff Working Paper No. 845 By Paul Schmelzing, January 2020

Download PDF

In a Bank of England staff paper, economic historian Paul Schmelzing has gathered an incredible 800 years of data on interest rates and inflation going back to the early 1300s. The research combines interest rates on hundreds of loans made to sovereigns by court bankers and wealthy merchants. Schmelzing’s paper has many curious details about medieval financial markets. Not included in his interest rate data, for instance, are loans denominated in various odd units. In times past, a lender might stipulate repayment in chickens, jewellery, land, fruit, wheat, rye, leases for offices, or some sort of entitlement. To keep calculations easier, Schmelzing only collects information on loans that are payable in cash.

Schmelzing’s data shows that real interest rates have been gradually falling for centuries (the real interest rate is the return one gets on a bond or a loan after adjusting for inflation). The monetary standard seems to have no influence on this trend.