# Bitcoin Transaction Hash in Pure PostgreSQL

In theory, Postgres should be able to verify transactions and blocks, as well as do a lot of other things that are currently only done by full nodes. For this to be performant, it will most likely require an extension written in C, but I’m curious how far we can get with bare bones Postgres.

More importantly, would that actually be useful? A node is really just a database, a very efficient one for a very specific purpose, but would leveraging the full power of Postgres be somehow more beneficial than just running Bitcoin-Qt or btcd, for example?

To get to the bottom of this would be a lot of work, and potentially a lot of fun. It would also be a great blockchain learning exercise. (If you’re working on a PG extension for Bitcoin or more generally blockchain, please do let me know!)

### Random Thoughts

The structure of the Bitcoin blockchain is relatively simple. We have transactions, which in turn have inputs and outputs and belong to blocks. Four tables, that’s it.

Using some fairly basic Go code, I was able to import the whole blockchain overnight on my old ThinkPad running Linux. The Go code needs some more polishing and is probably worthy of a separate write-up, so I won’t get into it for now. Below is the schema I used. I intentionally left out referential integrity and indexes to keep it simple and avoid premature optimization.

There are a couple of projects out there that keep the blockchain in a database, most notably Abe. I haven’t studied the code very carefully, but my initial impression was that Abe tries to use standard SQL that would work across most big databases, which is philosophically different from my objective of going 100% Postgres and leveraging all that it can do for us.

Bitcoin uses a lot of uint32’s. A Postgres INT is the correct size, but it is signed, which means we have to use the next larger type, BIGINT. It seems like it might be a waste to use 64 bits for a 32-bit value, but I couldn’t think of anything better than a BIGINT. For the binary stuff it seems like BYTEA is the best match.

So what can we do with this? There is no easy way to create or verify an Elliptic Curve signature in Postgres, but with the help of the pgcrypto extension, we should be able to at least generate the correct SHA256 digest which is used in the signature. As a side note, EC signature math is actually remarkably simple and could probably be implemented as a PG function, but I’m too lazy. Here it is in a few lines of Python.
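
The EC math mentioned above can be sketched as follows: a toy ECDSA over secp256k1 (the curve Bitcoin uses) in pure Python. This is my own illustration, not production code; it skips point validation, range checks and all hardening.

```python
import hashlib

# secp256k1 parameters
P = 2**256 - 2**32 - 977  # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def inv(a, m):
    return pow(a, m - 2, m)              # modular inverse (m is prime)

def ec_add(a, b):
    if a is None: return b               # None is the point at infinity
    if b is None: return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if a == b:
        s = 3 * x1 * x1 * inv(2 * y1, P) % P   # tangent slope
    else:
        s = (y2 - y1) * inv(x2 - x1, P) % P    # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def ec_mul(k, p):
    r = None                             # double-and-add
    while k:
        if k & 1:
            r = ec_add(r, p)
        p = ec_add(p, p)
        k >>= 1
    return r

def sign(z, priv, k):                    # z: digest as int, k: per-signature nonce
    r = ec_mul(k, G)[0] % N
    s = inv(k, N) * (z + r * priv) % N
    return r, s

def verify(z, sig, pub):
    r, s = sig
    w = inv(s, N)
    pt = ec_add(ec_mul(z * w % N, G), ec_mul(r * w % N, pub))
    if pt is None:
        return False
    return pt[0] % N == r
```

Note that the nonce k must be unique and secret per signature; reusing it famously leaks the private key.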

The rules on how Bitcoin generates the hash (which is then signed) are slightly complicated, and that’s an understatement.

For the purposes of this exercise, I’d just be happy with a value that matches, even if the code does not fully comply with the Bitcoin rules.

One problem I ran into was that, because the Bitcoin blockchain is little-endian except where it isn’t, you often need a way to reverse the bytes in a BYTEA. Strangely, Postgres does not provide a way to do that, unless I’m missing something. But thanks to Stack Overflow, here is one way to do it:
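
For a concrete illustration of the endianness quirk (in Python, where the reversal is a one-liner): the merkle root of the genesis block as it sits in the raw header versus the byte-reversed form that block explorers display.

```python
# The merkle root bytes in the raw genesis block header (little-endian):
stored = bytes.fromhex(
    "3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a")

# The form explorers display is simply the byte-reversed (big-endian) hex.
# Trivial in Python; in Postgres this takes a generate_series trick.
displayed = stored[::-1].hex()
```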

We also have no way to render a Bitcoin varint, but we can fake it with some substringing for the time being.
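
For reference, these are the CompactSize varint rules that the substringing has to emulate, sketched in Python:

```python
def varint(n):
    # Bitcoin's CompactSize encoding: one byte up to 0xfc, then a marker
    # byte followed by the value in 2, 4 or 8 little-endian bytes.
    if n < 0xfd:
        return bytes([n])
    if n <= 0xffff:
        return b"\xfd" + n.to_bytes(2, "little")
    if n <= 0xffffffff:
        return b"\xfe" + n.to_bytes(4, "little")
    return b"\xff" + n.to_bytes(8, "little")
```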

Equipped with this, we can construct the following statement. Sorry, it’s a little long, and I do not have the patience to explain it in writing.

The particular transaction we are looking at is this one. It happens to have an id of 37898 in my database. In case you’re wondering, for this example I used a subset of the blockchain which only has the first 182,000 blocks. On the full blockchain and without indexes, this statement would have taken an eternity to execute.

What makes this particular transaction interesting is that it has two inputs, which is slightly trickier, because to spend them there need to be two different signatures of the same transaction. This is because before signing, the scriptSig of the input being signed needs to be replaced with the referenced output’s scriptPubKey (the oversimplified version). This is reflected in the SQL in the use of LATERAL and CASE.
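
To make that per-input substitution concrete, here is a hypothetical Python sketch of the same idea. The serialize callback and all the names are made up, and real transaction serialization has more fields; this only shows the script-swapping and double-SHA256 steps.

```python
import hashlib

def dsha256(data):
    # Bitcoin's double SHA-256
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def sighashes(serialize, n_inputs, prev_scripts):
    # For the i-th signature, input i carries the previous output's
    # scriptPubKey and every other input carries an empty script;
    # the SIGHASH_ALL type (0x01) is appended as 4 little-endian bytes.
    digests = []
    for i in range(n_inputs):
        scripts = [prev_scripts[i] if j == i else b""
                   for j in range(n_inputs)]
        preimage = serialize(scripts) + (1).to_bytes(4, "little")
        digests.append(dsha256(preimage))
    return digests
```

A transaction with two inputs therefore produces two distinct digests, one per signature, exactly as the SQL has to.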

You do not have to take my word for it that the two hashes are correct; we can verify them fairly easily with a bit of help from the Python ecdsa library. Here is the code to verify the second hash. The key and the signature are in the transaction itself.

I hope this was fun! Now I wonder how hard it would be to make an extension to provide all the functionality required by Bitcoin….

# Electricity Cost of 1 Bitcoin (Sep 2017)

How much does it cost in electricity to mine a Bitcoin?

As of Sep 28, 2017, according to blockchain.info, the hashrate is 9,214,860,125 GH/s.

These days it seems that the best miner available for sale is the AntMiner S9. It is actually over a year old, and there are faster and more energy-efficient ASICs now, e.g. from BitFury, but it is very hard to get any information on those, so we will just use the S9 numbers.

The S9 is capable of 12,930 GH/s. The collective Bitcoin hash rate is equivalent to 712,672 S9 miners running in parallel.

An S9 uses 1375W, which means that in 1 hour it consumes 1.375 kWh.

In the USA, a kWh costs $0.12 on average. (It can be as low as $0.04, according to this EIA chart.) At 12c per kWh, a running S9 costs $0.165 per hour.

712,672 running S9’s would cost $117,591.02 per hour. Bitcoin blocks are solved at a rate of 6 per hour on average. Thus, each block costs $19,598.50 to solve.

The current mining reward is 12.5 BTC, which gives us the answer:

At \$0.12 per kWh, a Bitcoin costs \$1,567.88 to mine.

At \$0.04 per kWh, a Bitcoin costs \$522.62 to mine.

This, of course, does not include hardware and other costs.
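
The arithmetic above can be reproduced in a few lines (all the numbers are taken from the text; the rounding differs by a few cents):

```python
hashrate_ghs = 9_214_860_125          # network hashrate, GH/s
s9_ghs, s9_kw = 12_930, 1.375         # one AntMiner S9: speed and power draw
blocks_per_hour, reward_btc = 6, 12.5

def cost_per_btc(usd_per_kwh):
    miners = hashrate_ghs / s9_ghs                   # equivalent S9 count
    usd_per_hour = miners * s9_kw * usd_per_kwh      # fleet electricity bill
    return usd_per_hour / blocks_per_hour / reward_btc

# cost_per_btc(0.12) is about $1,568; cost_per_btc(0.04) about $523.
```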

It’s quite likely that the largest mining operations pay even less than \$0.04 for electricity, and the hardware they use is many times more efficient. While grossly inaccurate, this estimate shows that mining is quite profitable, and that the Bitcoin price would have to fall a lot for mining to stop being profitable.

### Looking at the Trend

The current difficulty is 1,103,400,932,964; the difficulty before that was 922,724,699,725. Difficulty adjusts every 2,016 blocks, which is about two weeks.

The difficulty number is a coefficient of the “difficulty 1 target”, i.e. a target where the hash has to begin with 4 zero bytes (32 zero bits). It means that mining is N times harder than at the “1 target”.

We can see that at the last adjustment the difficulty went up by 180,676,233,239, or 16%, which is quite a bit in just two weeks. The adjustment before that was from 888,171,856,257, or 4%.

Assuming that the only miner in the world was the S9, the difficulty adjustment can only be explained by more S9’s coming online. The number of S9’s online is directly proportional to the hashrate, which is directly proportional to the difficulty. Thus there is a direct relationship between energy cost and difficulty.

When the difficulty was 922,724,699,725 (Sep 6 through 16), the hash rate was about 8,000,000 TH/s, the equivalent of 618,716 S9’s. At that difficulty and the 12c per kWh price, a BTC cost \$1,361 to mine.
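
The difficulty-to-hashrate proportionality is mechanical: each unit of difficulty corresponds to roughly 2^32 expected hashes per block, so at the 600-second target block time the implied network hashrate is easy to estimate. A sketch:

```python
def implied_hashrate(difficulty, block_time_s=600):
    # Expected hashes per block is roughly difficulty * 2**32;
    # divide by the target block time to get hashes per second.
    return difficulty * 2**32 / block_time_s

# At difficulty 922,724,699,725 this gives about 6.6e18 H/s (6.6M TH/s),
# in the ballpark of the ~8M TH/s observed; the actual hashrate runs
# ahead of the difficulty while the network is growing.
```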

Now let’s look back at the world before the S9, which uses the Bitmain BM1387 16nm ASIC. Before the S9 there was the S7, based on the Bitmain BM1385 28nm ASIC. The S7’s power consumption is roughly the same as the S9’s (or let’s assume it is, for simplicity), but it is only capable of 4,000 GH/s.

Back at the end of 2015 when the S7 was announced, the hashrate was around 700,000,000 GH/s, equivalent to 175,000 S7’s. That cost \$28,875 per hour, or \$4,812.50 per block. The block reward was 25 Bitcoins then, so a Bitcoin would cost only \$192 to mine. (With a 12.5 BTC reward it would have been \$385.)

This is all very confusing, but we can see that faster hardware, and more of it, drives the cost of mining up, and that the relationship between the difficulty and the cost of mining a Bitcoin is linear. Faster hardware enables a higher hash rate at improved energy efficiency, and the difficulty adjusts to keep the rate of blocks, and thus the supply of new BTC, at one block per 10 minutes.

The cost factor behind Bitcoin is energy, and spending more energy on mining makes a Bitcoin more expensive and less profitable. However, a more energy-expensive Bitcoin is a more sound/secure Bitcoin from the cryptographic perspective, which means it is likely to go up in USD price, and thus should still be profitable for the miners. This is a very interesting dynamic, because if the BTC/USD price weren’t going up, the miners would be bitter enemies and would do everything possible to prevent more miners from coming online. The rise of the BTC/USD price is what makes more miners coming online a positive rather than a threat. So far we have not seen any news reports of mining facilities being sabotaged, which probably means miners are not enemies.

I will need to think on this some more as there are a lot of moving parts. But if I can make a cursory conclusion here, it is that (industrial) mining is and will remain very profitable for some time.

# Bitcoin: USD Value

In part 1 I explained how money has always been a global ledger and Bitcoin is just a different implementation of one. The million dollar question remains, what should Bitcoin be worth in a currency we’re more familiar with, such as USD?

### Asset Pricing

To illustrate the dilemma we’re faced with, let’s look at three types of assets and how we price them.

Stocks: the price is determined by present and future profits, which is relatively straightforward.

Commodities (e.g. oil): the price is a function of supply (oil being produced) and demand (oil burned in engines or whatever).

Store of value: this is anything that is bought because it keeps its value. Most commonly it is precious metals like gold, but it can also be fiat currency or valuable works of art. This category is the most fitting for Bitcoin. Pricing of a store of value is strangely arbitrary; I attempt to explain it below.

### Speculative Demand

There are two kinds of demand. Actual demand is based on our everyday needs. For example, we rely on combustion engines, which consume oil. Engines burn oil (or its byproducts), converting it to exhaust gases, at which point it is no more. The more we drive, the more oil we burn, the higher the demand.

The second kind of demand is speculative. It is based on the expectation of a future price change. Speculators buy assets they expect to go up in price and sell those they don’t. When everyone wants to buy oil because they think the price is going up, its price does indeed go up. But that is not necessarily related to the actual supply of oil from the ground (it often is related, but not always).

### Right Price

In part 1 we covered how a sale is actually a loan, and how the seller ends up with tokens representing the value owed to the seller. The number of tokens (aka the price) is a reflection of how we value things relative to each other.

When we buy stuff for everyday use we establish a price range that is driven by, for lack of a better term, common sense. For example we may think that a loaf of bread is worth a dozen eggs. If the price of something exceeds common sense, we will forego buying it. This means that the price has a direct effect on demand especially when it comes to everyday consumption items such as food or fuel.

Speculators have no respect for common sense and the right price. They are only concerned with the trend. It is possible for the speculative demand to drive the price way above the common sense level, we saw that when “peak oil” was a thing. But the price will eventually gravitate towards the common sense price.

### Store of Value Price

In contrast, price for a store of value is purely speculative, which means the sensibility of the price does not apply.

Let’s take gold, for example. Intuitively we might reason that there is a (non-speculative) supply and demand for gold, but it’s actually illusory. The annual production of gold is minute compared to the total gold above ground, which means there is essentially no supply. There is also next to no demand, because gold cannot be consumed. There is never less or more gold available in the world, its quantity is fairly constant. Yet the price fluctuates. The only explanation for this is speculation.

There is simply no such thing as the “right price” for store of value. If I want to move a million dollars into gold, the price of gold is of no consequence to me, be it a thousand or a million dollars per ounce. So long as I know that it is stable, it is a good store of value.

The good news here is that no price is too high (or too low) for Bitcoin. $4K only seems high because a year ago it was $400. We tend to judge the price based on history, and there is good sense in that; indeed, what goes up in value too much too fast often subsequently corrects.

### Importance of Market Cap

Market capitalization is the total price of all of an asset available in the world. It’s easy to compute the market cap for a stock because we know the number of shares outstanding. There is no such thing as a market cap for a commodity, because it is continuously produced and consumed. When it comes to something like gold, we can estimate the market cap because we know approximately how much physical gold is above ground. Bitcoin, like gold, has an approximate market cap (approximate because it is not possible to know how much BTC has been lost).

Market cap size is critical for adoption of a store of value. It needs to be large enough to “fit” even very large amounts of fiat, ideally without affecting the market. The gold market cap is estimated at 7 trillion USD, which means that even the richest people can move all their assets into gold without moving the market. (At least one at a time; all of them at once would move the market big time.)

Bitcoin’s market cap of about 70B USD is not large enough for even one of the richest people on the planet. This implies that if the market cap does not grow, Bitcoin is likely to fail as a store of value.

### Hash Rate

What sets Bitcoin apart from all other cryptocurrencies is its extremely high hash rate. This means that a Bitcoin is orders of magnitude more “precious” than any other crypto coin presently in existence.

There is a definite correlation between the Bitcoin hash rate and the price. Some people argue that hash rate follows price, not the other way around, and it’s probably true.

Bitcoin’s high hash rate is what makes it the best store of value among crypto coins today.

Ultimately, I believe adoption is the most important factor in Bitcoin USD price. Greater adoption will increase the number of speculators willing to own Bitcoin, it will drive the price up, and hopefully bring it to a level comparable to that of gold.

The key to adoption is not ease of payment or volume of transactions, as we used to think until very recently. The key to adoption is understanding the mathematics behind Bitcoin. With all the hype surrounding it, remarkably few understand how sound Bitcoin actually is. In many ways it is more sound than any other known store of value, including gold.

Regardless of whether Bitcoin becomes de facto digital gold or not, we are witnessing a historic transformation possibly bigger than the Internet itself.

# Bitcoin: Better Ink Than Gold?

The fundamental question about Bitcoin is not whether it is sound from the cryptography standpoint. The question is: what is it?

### Money is Debt Ink

To define Bitcoin we need to look back at the history of money. The earliest money was in the form of things that were scarce and impossible to falsify, something like specific kinds of sea shells. Everyone knew that stuff could be traded for these tokens.

Once such monetary tokens were invented, we no longer needed to decide what to barter right there and then, we could postpone the decision. One could sell milk for tokens, then use those tokens to buy a spear later, this way the milk didn’t spoil while waiting for the spear to be made.

What is not very obvious is that the tokens represented debt. A sale is really a loan in disguise. Before the sale, the seller had milk. After the sale, the seller had tokens, which are proof that the value of the tokens is owed to the seller. In other words, the tokens received for the milk sold were a record of debt. Tokens are the ink in which this record is written.

It is noteworthy that there is no money if there is no debt, or that money implies debt. It’s a simple principle that so few understand.

### World-Wide Debt Ledger

The best way of thinking about money is that it is the medium in which we maintain a world-wide record of debt. The entries in this book or ledger are written as physical tokens. Only the people in possession of the tokens actually know how much they have and there is no history, only the final state. The history exists only in the minds (or records) of the traders. It is very private.

### Gold Ink

Later people started using rare metals such as gold or silver as money. Metals were better than sea shells because they were divisible. We could now make arbitrary-size tokens we called (coined?) coins.

Although we intuitively think that gold has a lot of value, in reality it has very little. Gold does not feed us or keep us warm. It does have some unique properties, but back when we started using gold as money we couldn’t possibly appreciate those, other than perhaps gold being pretty and extremely durable.

Gold is also rare. But rarity does not imply value. The sea shells were worthless before they were used as money, and they are worthless now, yet they too are rare.

### Banks, Paper and Records of Records (of Records)

But it turned out that keeping valuable tokens was difficult, they could be lost or stolen, and worse, people were willing to kill for them. And so we decided to keep them all safe in one place. This was the original bank.

The bank issued paper notes that corresponded to the gold in the vault. Now these paper notes could be traded for anything. This was because people knew that even though the paper is worthless, it represents gold that is in the bank. At any time one could go to the bank, give the bank the paper note and receive gold (at which point the bank would destroy the paper note because the debt is settled).

A paper note is a record of the record of debt. The true record was in gold, paper was a copy. It’s a bit of a mind-twister, but humans have become really good at rewriting the original debt ledger in other mediums.

Ironically, the concept of the bank as a safe vault never really worked: people were willing to steal and kill for paper money just the same. These days bank vaults keep paper notes as if they were gold. And the bank’s computer keeps a record of the record of the record of debt.

### Real Estate Ink

At some point bankers realized that they can manipulate the monetary supply because only the bankers actually knew how much gold they had. It was done “for the good of the public” who could get easier loans, but it was also an easy way for the banks to make money out of nothing.

Eventually it was decreed that not just gold but anything could be similarly held by the bank, so that vastly more paper notes could be issued. Most notably real estate, the arrangement of issuing paper notes for a house being known as a mortgage. And since a house cannot be placed into the vault, it too had to be recorded, creating yet another layer of abstraction. It all ended with collateralized debt obligations, credit default swaps and ultimately the 2008 subprime mortgage crisis. The next year, Bitcoin was born…

### Monetary System is Just a Ledger

The bottom line remains: we kept a ledger. The recording medium was precious metals, then evolved to paper and metals, and finally, when we went off the gold standard, it became just paper reflecting the value of arbitrary things held under lien as collateral.

### Enter Blockchain

The Bitcoin blockchain is also a medium for this ledger, only instead of relying on the scarcity of precious metals, the scarcity is in the mathematical complexity of a problem.

And this is where our minds begin to play tricks on us, because this is a concept previously unknown to humans. A Bitcoin, which takes an enormous amount of computational power to generate, is actually, really scarce. Yes, it is not physical, it is just “knowledge” or “information”, but by all laws of nature it is scarce, in fact more scarce than gold, the total amount of which in the universe isn’t fully known.

### But Bitcoin is just an Agreement?

Interestingly, Bitcoin is merely an agreement and one might argue that some day we can collectively decide to increase the 21 million limit thereby diluting Bitcoin value. But can we actually do that, or will it not be Bitcoin at that point? I think only time will tell.

We do already have a lot of things that we agree on and we don’t really question how it happened. The aforementioned shells were collectively agreed upon. We agree on what the current date is, does it matter how it happened? In fact, much of what the world is, just is, including the fiat money (where “fiat” literally means “let it be done”). And so now Bitcoin just is.

The sea shells ceased to exist as money in favor of precious metals, and it is likely that the same will at some point happen with Bitcoin.

History does show that when it comes to money, people show their worst traits. This is why countries with solid currencies have big armies and police, and very strict laws regarding manipulation of money. This is how “fiat” actually works.

Amazingly, the mathematical principles on which Bitcoin is based do not need to be defended. No army in the world could ever change a single prime number.

### Alternative Realities

The name Bitcoin refers to a specific blockchain. There can be many like it. The name could have been different, the parameters of the algorithm could have been different, just like the dollar bills could have been blue. There are other cryptographic currencies, and they are different, they too now exist. (Caveat: some of them are mathematically bogus).

One could argue that gold exists in nature, while Bitcoin was created by man, and thus gold is somehow more real. But Bitcoin rests on the mathematical principles that too are just part of this universe, they were not created by man, they were discovered and applied, and again in this sense Bitcoin isn’t much different than gold.

### The Mystery of Value

The mystery to me is how we collectively set the value of things like gold or Bitcoin, now that we’ve demonstrated that as money they are equivalent. Why is an ounce of gold worth $1,300? Who decided that? The market? Is the real value of it in how good of an ink it is in the world-wide debt ledger?

To be continued…

# Tgres Status - July 4th 2017

It’s been a while since I’ve written on Tgres; here’s a little update, Independence Day edition.

### Current Status

The current status is that Tgres is looking more and more like a finished product. It still needs time and especially user testing (the ball is in your court, dear reader), because only time reveals the weirdest of bugs and validates stability. I wouldn’t recommend ditching your current stack just yet, but at this point you’d be remiss not to have given Tgres a spin.

Recently I had an opportunity to test Tgres as a mirror replica of a sizable Graphite/Statsd/Grafana set up receiving approximately 10K data points per second across more than 200K series, and the results were inspiring. Tgres handled the incoming data without breaking a sweat on “hardware” (EC2 instances, rather) that was a fraction of the Graphite machines, while still outperforming it in most respects.

I’d say the biggest problem (and not really a Tgres one) is that mirroring Graphite functionality exactly is next to impossible. Or, rather, it is possible, but it would imply purposely introducing inaccuracies and complexities. Because of this Tgres can never be a “drop-in” replacement for Graphite. Tgres can provide results that are close but not identical, and dashboards and how the data is interpreted would require some rethinking.

## What’s new?

### Data Point Versioning

In a round-robin database, slot values are overwritten as time moves forward and the archive comes full-circle. Whenever a value is not overwritten for whatever reason, a stale value from an obsolete iteration erroneously becomes the value for the current iteration.


One solution is to be diligent and always make sure that values are overwritten. This can be excessively I/O-intensive for sparse series: if a series is sparse, then more I/O resources are spent blanking out non-data (by setting the value to NaN or whatever) than storing actual data points.

A much more efficient approach is to store a version number along with the data point. Every time the archive comes full-circle, the version is incremented. With versions there is no need to nullify slots; they become obsoleted by virtue of the data point version not matching the current version.

Under the hood, Tgres does this by keeping a separate array in the ts table which contains a smallint (2 bytes) for every data point. The tv view is aware of it and takes versions into account without exposing any details; in other words everything works as before, only Tgres is a lot more efficient and executes far fewer SQL statements.

### Zero Heartbeat Series

Tgres always strives to connect the data points. If two data points arrive more than a step apart, the slots in between are filled in to provide continuity. A special parameter called Heartbeat controls the maximum time between data points; a gap greater than the Heartbeat is considered unknown, or NaN. This was a deliberate design decision from the beginning, and it is not changing.

Some tools choose to store data points as is, deferring any normalization to query time. Graphite is kind of in the middle: it doesn’t store the data points as is, yet it does not attempt to do any normalization either, which ultimately leads to inaccuracies I describe in another post. The concept of Heartbeat should be familiar to those experienced with RRDTool, but it is unknown to Graphite users, since Graphite has no such parameter.

This “disconnected” behavior is often taken advantage of to track things that aren’t continuous but are event occurrences, which can still benefit from being observed as a time series.


Tracking application deploys, where each deploy is a data point with a value of 1, is one such example. Tgres now supports this behavior when the Heartbeat is set to 0. Incoming data points are simply stored in the best matching slot, and no attempt is made to fill in the gap in between with data.

### Tgres Listens to DELETE Events

This means that to delete a DS all you need to do is run DELETE FROM ds WHERE ... and you’re done. All the corresponding table rows will be deleted by Postgres because of the foreign key constraints, and the DS will be cleared from the Tgres cache at the same time. This is possible thanks to Postgres’ excellent LISTEN/NOTIFY capability.

### In-Memory Series for Faster Querying

A subset of series can be kept entirely in memory. The recent testing has shown that people take query performance very seriously, and dashboards with refresh rates of 5s or even 1s are not unheard of. When you have to go to the database to answer every query, and the dashboard touches a hundred series, this does not work too well.

To address this, Tgres now keeps an in-memory cache of queried series. The cache is an LRU and its size is configurable. On restart Tgres saves the cache keys and loads the series back to keep the cache “warm”. Requests for some cached queries can now be served in literally microseconds, which makes for a pretty amazing user experience.

### DS and RRA State is an Array

One problem with the Tgres table layout was that the DS and RRA tables contained frequently updated columns such as lastupdate, value and duration. The initial strategy was that these could be updated periodically in a lazy fashion, but it became apparent that this was not practical for any substantial number of series. To address this, all frequently mutable attributes are now stored in arrays, the same way as data points, and therefore can be updated 200 (or whatever segment width is configured) at a time.


To simplify querying DSs and RRAs, two new views (dsv and rrav) were created which abstract away the array value lookup.

### Whisper Data Migration

The whisper_import tool has been pretty much rewritten and has better instructions. It’s been tested extensively, though admittedly on one particular set up; your mileage may vary.

### Graphite DSL

Lots and lots of fixes and additions to the Graphite DSL implementation. Tgres still does not support all of the functions, but that was never the plan to begin with.

## Future

Here are some ideas I might tackle in the near future. If you are interested in contributing, do not be shy; pull requests, issues and any questions or comments are welcome. (Probably best to keep development discussion in Github.)

#### Get rid of the config file

Tgres doesn’t really need a config file - the few options that are required for running should be command line args; the rest, such as new series specs, should be in the database.

#### A user interface

Not terribly high on the priority list, since the main UI is psql for low level stuff and Grafana for visualization, but something to list series and tweak config options might come in handy.

#### Track Usage

It would be interesting to know how many bytes exactly a series occupies, how often it is updated and queried, and what the resource cost for maintaining it is.

#### Better code organization

For example, vcache could be a separate package.

#### Rethink the DSL

There should be a DSL version 2, which is not based on the Graphite unwieldiness. It should be very simple and versatile and not have hundreds of functions.

#### Authentication and encryption

No concrete ideas here, but it would be nice to have a plan.

#### Clustering needs to be re-considered

The current clustering strategy is flawed. It might work with the current plan, but some serious brainstorming needs to happen here. Perhaps it should just be removed in favor of delegating horizontal scaling to the database layer.

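
As a footnote to the data point versioning section above, the idea can be modeled in a few lines. This is my own toy sketch, not Tgres code:

```python
class RoundRobinArchive:
    # Slots are reused as time wraps around; a per-slot version records
    # which iteration last wrote the slot, so stale values are detected
    # on read instead of being nullified on write (no extra I/O).
    def __init__(self, size):
        self.size = size
        self.vals = [None] * size
        self.vers = [-1] * size

    def write(self, step, value):
        self.vals[step % self.size] = value
        self.vers[step % self.size] = step // self.size

    def read(self, step):
        if self.vers[step % self.size] == step // self.size:
            return self.vals[step % self.size]
        return None   # slot holds data from an obsolete iteration
```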

# Building a Go Web App in 2017

Update: part 2 is here, enjoy. And part 3. And part 4.

A few weeks ago I started building yet another web-based app, in Go. Being mostly a back-end developer, I don’t have to write web apps very often, and every time I do, it seems like a great challenge. I often wish someone would write a guide to web development for people who do not have all day to get into the intricacies of great design and just need to build a functional site that works without too much fuss.

I’ve decided to use this opportunity to start from scratch and build it to the best of my understanding of how an app ought to be built in 2017. I’ve spent many hours getting to the bottom of all the things I’ve typically avoided in the past, just so that for once in many years I can claim to have a personal take on the matter and have a reusable recipe that at least works for me, and hopefully not just me.

This post is the beginning of what I expect to be a short series highlighting what I’ve learned in the process. The first post is a general introduction describing the present problematic state of affairs and why I think Go is a good choice. The subsequent posts have more details and code. I am curious whether my experience resonates with others, and what I may have gotten wrong, so don’t hesitate to comment!

Edit: If you’d rather just see code, it’s here.

### Introduction

In the past my basic knowledge of HTML, CSS and JavaScript has been sufficient for my modest site-building needs. Most of the apps I’ve ever built were done using mod_python, directly using the publisher handler. Ironically for an early Python adopter, I’ve also done a fair bit of work with Rails. For the past several years I focused on (big) data infrastructure, which isn’t web development at all, though having to build web-based UIs is not uncommon. In fact the app I’m referring to here is a data app, but it’s not open source and what it does really doesn’t matter for this discussion.

Anyway, this should provide some perspective of where I come from when approaching this problem.

### Python and Ruby

As recently as a year ago, Python and Ruby would be what I would recommend for a web app environment. There may be other similar languages, but from where I stand, the world is dominated by Python and Ruby. For the longest time the main job of a web application was constructing web pages by piecing HTML together server-side. Both Python and Ruby are very well suited for the template-driven work of taking data from a database and turning it into a bunch of HTML. There are lots of frameworks/tools to choose from, e.g. Rails, Django, Sinatra, Flask, etc, etc. And even though these languages have certain significant limitations, such as the GIL, the ease with which they address the complexity of generating HTML is far more valuable than any trade-offs that come with them.

### The GIL

The Global Interpreter Lock is worthy of a separate mention. It is the elephant in the room, by far the biggest limitation of any Python or Ruby solution. It is so crippling that people can get emotional talking about it; there are endless GIL discussions in both the Ruby and Python communities.

For those not familiar with the problem: the GIL only lets one thing happen at a time. When you create threads and it “looks” like parallel execution, the interpreter is still executing instructions sequentially. This means that a single process can only take advantage of a single CPU.

There do exist alternative implementations, for example JVM-based ones, but they are not the norm. I’m not exactly clear why - they may not be fully interchangeable, they probably do not support C extensions correctly, and they might still have a GIL, not sure - but as far as I can tell, the C implementation is what everyone uses out there.
Re-implementing the interpreter without the GIL would amount to a complete rewrite, and more importantly it may affect the behavior of the language (at least that’s my naive understanding), so for this reason I think the GIL is here to stay.

Web apps of any significant scale absolutely require the ability to serve requests in parallel, taking advantage of every CPU a machine has. Thus far the only possible solution known is to run multiple instances of the app as separate processes. This is typically done with the help of additional software such as Unicorn/Gunicorn, with every process listening on its own port and running behind some kind of a connection balancer such as Nginx and/or Haproxy. Alternatively it can be accomplished via Apache and its modules (such as mod_python or mod_wsgi); either way it’s complicated.

Such apps typically rely on the database server as the arbiter for any concurrency-sensitive tasks. To implement caching without keeping many copies of the same thing on the same server, a separate memory-based store is required, e.g. Memcached or Redis, usually both. These apps also cannot do any background processing; for that there is a set of tools such as Resque. And then all these components need to be monitored to make sure everything is working. Logs need to be consolidated, and there are additional tools for that. Given the inevitable complexity of this set up there is also a requirement for a configuration manager such as Chef or Puppet.

And still, these set ups are generally not capable of maintaining a large number of long term connections, a problem known as C10K.

Bottom line is that a simple database-backed web app requires a whole bunch of moving parts before it can serve a “Hello World!” page. And nearly all of it because of the GIL.

### Emergence of Single Page Applications

More and more, server-side HTML generation is becoming a thing of the past.
The latest (and correct) trend is for UI construction and rendering to happen completely client-side, driven by JavaScript. Apps whose user interface is fully JS-driven are sometimes called Single Page Applications, and they are in my opinion the future whether we like it or not. In an SPA scenario the server only serves data, typically as JSON, and no HTML is constructed there. In this set up, the tremendous complexity introduced primarily so that a popular scripting language could be used isn’t worth the trouble. Especially considering that Python or Ruby bring little to the table when all of the output is JSON.

### Enter Golang

Go is gradually disrupting the world of web applications. Go natively supports parallel execution, which eliminates the requirement for nearly all the components typically used to work around the GIL limitation.

Go programs are binaries which run natively, so there is no need for anything language-specific to be installed on the server. Gone is the problem of ensuring the correct runtime version the app requires: there is no separate runtime, it’s part of the binary. Go programs can easily and elegantly run tasks in the background, thus no need for tools like Resque. Go programs run as a single process, which makes caching trivial and means Memcached or Redis is not necessary either. Go can handle an unlimited number of parallel connections, eliminating the need for a front-end guard like Nginx.

With Go, the tall stack of Python, Ruby, Bundler, Virtualenv, Unicorn, WSGI, Resque, Memcached, Redis, etc, etc is reduced to just one binary. The only other component generally still needed is a database (I recommend PostgreSQL). It’s important to note that all of these tools are available as before for optional use, but with Go there is the option of getting by entirely without them. To boot, this Go program will most likely outperform any Python/Ruby app by an order of magnitude, require less memory, and with fewer lines of code.
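To make the background-work point concrete, here is a tiny self-contained sketch (not from any real app) where a pool of goroutines plays the role that Resque-style workers play elsewhere:

```go
package main

import (
	"fmt"
	"sync"
)

// sumSquares farms the work out to a pool of goroutines - the kind of
// in-process "background workers" that in Python/Ruby land typically
// require an external tool like Resque and separate worker processes.
func sumSquares(n, workers int) int {
	jobs := make(chan int)
	results := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				results <- j * j // stand-in for real work
			}
		}()
	}

	// feed the jobs, then close the results channel once all workers finish
	go func() {
		for i := 1; i <= n; i++ {
			jobs <- i
		}
		close(jobs)
		wg.Wait()
		close(results)
	}()

	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(sumSquares(5, 4)) // 1+4+9+16+25 = 55
}
```

The goroutines are scheduled across every CPU the machine has, with no extra infrastructure.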
### What about a framework?

The short answer is: a framework is entirely optional and not recommended. There are many projects claiming to be great frameworks, but I think it’s best to try to get by without one. This isn’t just my personal opinion; I find that it is generally shared in the Go community.

It helps to think about why frameworks existed in the first place. On the Python/Ruby side this was because these languages were not initially designed to serve web pages, and lots of external components were necessary to bring them up to the task. The same can be said for Java, which, just like Python and Ruby, is about as old as the web as we know it, or even pre-dates it slightly.

As I remember it, out of the box, early versions of Python did not provide anything to communicate with a database, there was no templating, HTTP support was confusing, networking was non-trivial, bundling crypto would not even be legal then, and there was a whole lot of other things missing. A framework provided all the necessary pieces and set out rules for idiomatic development for all the common web app use cases.

Go, on the other hand, was built by people who had already experienced and understood web development. It includes just about everything necessary. An external package or two may be needed to deal with certain specific aspects, e.g. OAuth, but by no means does a couple of packages constitute a “framework”.

If the above take on frameworks is not convincing enough, it’s helpful to consider the framework learning curve and the risks. It took me about two years to get comfortable with Rails. Frameworks can become abandoned and obsolete, and porting apps to a new framework is hard if not impossible. Given how quickly the information technology sands shift, frameworks are not to be chosen lightly.

I’d like to specifically single out tools and frameworks that attempt to mimic idioms common to the Python, Ruby or JavaScript environments.
Anything that looks or feels or claims to be “Rails for Go”, or features techniques like injection or dynamic method publishing which rely heavily on reflection, is not the Go way of doing things; it’s best to stay away from those.

Frameworks definitely do make some things easier, especially in the typical business CRUD world, where apps have many screens with lots of fields, manipulating data in complex and ever-changing database schemas. In such an environment, I’m not sure Go is a good choice in the first place, especially if performance and scaling are not a priority.

Another issue common to frameworks is that they abstract lower level mechanisms from the developer, often in a way that over time grows to be so arcane that it is literally impossible to figure out what is actually happening. What begins with an idiomatic alias for a single line of JavaScript becomes layers upon layers of transpilers and minimizers on top of helpers hidden somewhere in a sub-dependency. One day something breaks and it’s impossible to know where to even look for the problem. It’s nice to know exactly what is going on; Go is generally very good about that.

### What about the database and ORM?

Similarly to frameworks, Go culture is not big on ORM’s. For starters, Go specifically does not support objects, which is what the O in ORM stands for.

I know that writing SQL by hand instead of relying on the User.find(:all).filter... convenience provided by the likes of ActiveRecord is unheard of in some communities, but I think this attitude needs to change. SQL is an amazing language. Dealing with SQL directly is not that hard, and quite liberating, as well as incredibly powerful. Possibly the most tedious part of it all is copying the data from a database cursor into structures, but here I found the sqlx project very useful.

### In Conclusion

I think this sufficiently describes the present situation of the server side.
The client side, I think, could be a separate post, so I’ll pause here for now. To sum up, thus far it looks like we’re building an app with roughly the following requirements:

• Minimal reliance on third party packages.

• No web framework.

• PostgreSQL backend.

• Single Page Application.

Continue to part 2.

# Building a Go Web App - Part 2

This is a continuation of part 1. (There is also part 3 and part 4).

So our app is going to have two major parts to it: client and server. (What year is this?). The server side is going to be in Go, and the client side in JS. Let’s talk about the server side first.

## The Go (Server) Side

The server side of our application is going to be responsible for initially serving up all the necessary JavaScript and supporting files if any, aka static assets, and data in the form of JSON. That’s it, just two functions: (1) static assets and (2) JSON.

It’s worth noting that serving assets is optional: assets could be served from a CDN, for example. But what is different is that this is not a problem for our Go app - unlike a Python/Ruby app, it can perform on par with Nginx and Apache serving static assets. Delegating assets to another piece of software to lighten the load is no longer necessary, though it certainly makes sense in some situations.

To make this simpler, let’s pretend we’re making an app that lists people (just first and last name) from a database table, that’s it. The code is here: https://github.com/grisha/gowebapp.

### Directory Layout

It has been my experience that dividing functionality across packages early on is a good idea in Go. Even if it is not completely clear how the final program will be structured, it is good to keep things separate to the extent possible. For a web app I think something along the lines of the following layout makes sense:

### Top level: package main

At the top level we have package main and its code in main.go.
The key advantage here is that with this layout go get github.com/user/foo can be the only command required to install the whole application into $GOPATH/bin.

Package main should do as little as possible. The only code that belongs here is to parse the command argument flags. If the app had a config file, I’d stick parsing and checking of that file into yet another package, probably called config. After that main should pass the control to the daemon package.

An essential main.go is:
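A self-contained sketch of what such a main.go might look like. The daemon.Config, db.Config and ui.Config types are inlined as local structs here, processFlags() takes an explicit argument slice via a FlagSet so the sketch can be exercised on its own (the real thing would presumably use the package-level flag functions), and the default values are purely illustrative:

```go
package main

import (
	"flag"
	"log"
	"os"
)

// Local stand-ins: in the real app these would be daemon.Config,
// db.Config and ui.Config, each defined in its own package.
type dbConfig struct{ ConnectString string }
type uiConfig struct{ Assets string }
type daemonConfig struct {
	ListenSpec string
	Db         dbConfig
	UI         uiConfig
}

// processFlags registers the three flags the text mentions and fills
// in a daemonConfig from the given arguments.
func processFlags(args []string) (*daemonConfig, error) {
	cfg := &daemonConfig{}
	fs := flag.NewFlagSet("gowebapp", flag.ContinueOnError)
	fs.StringVar(&cfg.ListenSpec, "listen", "localhost:3000", "HTTP listen spec")
	fs.StringVar(&cfg.Db.ConnectString, "db-connect", "dbname=gowebapp sslmode=disable", "DB connect string")
	fs.StringVar(&cfg.UI.Assets, "assets-path", "assets", "Path to assets dir")
	if err := fs.Parse(args); err != nil {
		return nil, err
	}
	return cfg, nil
}

func main() {
	cfg, err := processFlags(os.Args[1:])
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("Will listen on %s", cfg.ListenSpec)
	// the real main would now hand over control: daemon.Run(cfg)
}
```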

The above program accepts three parameters, -listen, -db-connect and -assets-path, nothing earth shattering.

#### Using structs for clarity

In the line cfg := &daemon.Config{} we are creating a daemon.Config object. Its main purpose is to pass around configuration in a structured and clear format. Every one of our packages defines its own Config type describing the parameters it needs, and packages can include other packages' configs. We see an example of this in processFlags() above: flag.StringVar(&cfg.Db.ConnectString, .... Here db.Config is included in daemon.Config. I find doing this very useful. These structures also keep open the possibility of serializing configs as JSON, TOML or whatever.

#### Using http.FileSystem to serve assets

The http.Dir(assetsPath) in setupHttpAssets is in preparation for how we will serve the assets in the ui package. The reason it’s done this way is to leave the door open for cfg.UI.Assets (which is an http.FileSystem interface) to be provided by other implementations, e.g. serving this content from memory. I will describe it in more detail in a later post.

Lastly, main calls daemon.Run(cfg) which is what actually starts our app and where it blocks until it’s terminated.

### package daemon

Package daemon contains everything related to running a process. Stuff like which port to listen on and custom logging would belong here, as well as anything related to a graceful restart, etc.

Since the job of the daemon package is to initiate the database connection, it will need to import the db package. It’s also responsible for listening on the TCP port and starting the user interface for that listener, therefore it needs to import the ui package, and since the ui package needs to access data, which is done via the model package, it will need to import model as well.

A bare bones daemon might look like this:
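A rough, runnable sketch of such a daemon; the db, model and ui calls appear only as comments, and the config types are local stand-ins for db.Config and ui.Config:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
)

// Local stand-ins for db.Config and ui.Config.
type dbConfig struct{ ConnectString string }
type uiConfig struct{ Assets http.FileSystem }

type Config struct {
	ListenSpec string
	Db         dbConfig
	UI         uiConfig
}

// Run wires the pieces together: connect to the database, build the
// model, listen on the TCP port and hand the listener to the ui. The
// db/model/ui calls are left as comments so this sketch runs on its own.
func Run(cfg *Config) error {
	log.Printf("Starting, HTTP on: %s", cfg.ListenSpec)

	// dbi, err := db.InitDb(cfg.Db)  // real app: connect to Postgres
	// m := model.New(dbi)            // real app: create the model

	l, err := net.Listen("tcp", cfg.ListenSpec)
	if err != nil {
		return fmt.Errorf("error creating listener: %v", err)
	}
	defer l.Close()

	// ui.Start(cfg.UI, m, l)  // real app: blocks here serving HTTP
	return nil
}

func main() {
	if err := Run(&Config{ListenSpec: "localhost:0"}); err != nil {
		log.Fatal(err)
	}
}
```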

Note how Config includes db.Config and ui.Config as I described earlier.

All the action happens in Run(*Config). We initialize a database connection, create a model.Model instance, and start the ui passing in the config, a pointer to the model and the listener.

### package model

The purpose of model is to separate how data is stored in the database from the ui, as well as to contain any business logic an app might have. It’s the brains of the app if you will.

The model package should define a struct (Model seems like an appropriate name) and a pointer to an instance of the struct should be passed to all the ui functions and methods. There should only be one such instance in our app - for extra credit you can enforce that programmatically by making it a singleton, but I don’t think that’s necessary.

Alternatively you could get by without a Model and just use the package model itself. I don’t like this approach, but it’s an option.

The model should also define structs for the data entities we are dealing with. In our example it would be a Person struct. Its members should be exported (capitalized) because other packages will be accessing those. If you use sqlx, this is where you would also specify tags that map elements to db column names, e.g. db:"first_name"

Our Person type:

In our case we do not need tags because our column names match the element names, and sqlx conveniently takes care of the capitalization, so Last matches the column named last.

#### package model should NOT import db

Somewhat counter-intuitively, model cannot import db. This is because db needs to import model, and circular imports are not allowed in Go. This is one case where interfaces come in handy. model needs to define an interface which db should satisfy. For now all we know is we need to list people, so we can start with this definition:

Our app doesn’t really do much, but we know it lists people, so our model should probably have a People() ([]*Person, error) method:

To keep things tidy, code should be in separate files, e.g. Person definition should be in person.go, etc. For readability, here is a single file version of our model:
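A sketch of what such a single-file model might look like; the SelectPeople method name is assumed, and fakeDb is a made-up in-memory stand-in so the sketch runs without Postgres:

```go
package main

import "fmt"

// Person is the single data entity of our app. No sqlx tags needed:
// the column names (id, first, last) match the field names.
type Person struct {
	Id          int64
	First, Last string
}

// db is the interface the db package must satisfy; model never imports
// db, which avoids the circular import.
type db interface {
	SelectPeople() ([]*Person, error)
}

// Model embeds the db interface; a single *Model instance gets passed
// to all the ui handlers.
type Model struct {
	db
}

func New(d db) *Model {
	return &Model{db: d}
}

func (m *Model) People() ([]*Person, error) {
	return m.SelectPeople()
}

// fakeDb satisfies db so the sketch runs without a database.
type fakeDb struct{}

func (fakeDb) SelectPeople() ([]*Person, error) {
	return []*Person{{Id: 1, First: "John", Last: "Doe"}}, nil
}

func main() {
	m := New(fakeDb{})
	people, _ := m.People()
	fmt.Println(people[0].First, people[0].Last) // John Doe
}
```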

### package db

db is the actual implementation of the database interaction. This is where the SQL statements are constructed and executed. This package also imports model because it will need to construct those structs from database data.

First, db needs to provide the InitDb function which will establish the database connection, as well as create the necessary tables and prepare the SQL statements.

Our simplistic example doesn’t support migrations, but in theory this is also where they might potentially happen.

We are using PostgreSQL, which means we need to import the pq driver. We are also going to rely on sqlx, and we need our own model. Here is the beginning of our db implementation:

Our InitDb() creates an instance of a pgDb, which is our Postgres implementation of the model.db interface. It keeps all that we need to communicate with the database, including the prepared statements, and exports the necessary methods to satisfy the interface.

Here is the code to create the tables and the statements. From the SQL perspective this is rather simplistic, it could be a lot more elaborate, of course:

Finally, we need to provide the method to satisfy the interface:

Here we’re taking advantage of sqlx to run the query and construct a slice from results with a simple call to Select() (NB: p.sqlSelectPeople is a *sqlx.Stmt). Without sqlx we would have to iterate over the result rows, processing each with Scan, which would be considerably more verbose.

Beware of a very subtle “gotcha” here. people could also be defined as var people []*model.Person and the method would work just the same. However, if the database returned no rows, the method would return nil, not an empty slice. If the result of this method is later encoded as JSON, the former would become null and the latter []. This could cause problems if the client side doesn’t know how to treat null.
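The gotcha is easy to demonstrate with nothing but encoding/json:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// encode marshals v and returns the JSON as a string.
func encode(v interface{}) string {
	js, _ := json.Marshal(v)
	return string(js)
}

func main() {
	var nilSlice []int           // declared but never allocated
	emptySlice := make([]int, 0) // allocated, zero elements

	fmt.Println(encode(nilSlice))   // null
	fmt.Println(encode(emptySlice)) // []
}
```

A nil slice marshals to null, an allocated empty slice to [], which is why the make([]*model.Person, 0) form is the safer choice here.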

That’s it for db.

### package ui

Finally, we need to serve all that stuff via HTTP and that’s what the ui package does.

Here is a very simplistic variant:

Note how indexHTML contains next to nothing. This is 100% of the HTML that this app will ever serve. It will evolve a little as we get into the client side of the app, but only by a few lines.

Also noteworthy is how the handler is defined. If this idiom is not familiar, it’s worth spending a few minutes (or a day) to internalize it completely as it is very common in Go. indexHandler() is not a handler, it returns a handler function. It is done this way so that we can pass in a *model.Model via closure, since an HTTP handler function definition is fixed and a model pointer is not one of the parameters.

In the case of indexHandler() we’re not actually doing anything with the model pointer, but when we get to implementing an actual list of people we will need it.

### Conclusion

Above is essentially all the knowledge required to build a basic Go web app, at least the Go side of it. Next week I’ll get into the client side and we will complete the people listing code.

Continue to part 3.

# Building a Go Web App - Part 3

This is part 3. See part 1 and part 2.

The previous two posts got us to a point where we had a Go app which was able to serve a tiny bit of HTML. This post will talk about the client side, which, alas, is mostly JavaScript, not Go.

### JavaScript in 2017

This is what gave me the most grief. I don’t really know how to categorize the mess that present day JavaScript is, nor do I really know what to attribute it to, and trying to rationalize it would make for a great, but entirely different blog post. So I’m just going to accept this as the reality we cannot change and move on to how to best work with it.

### Variants of JS

The most common variant of JS these days is known as ES2015 (aka ES6 or ECMAScript 6th Edition), and it is mostly supported by the more or less latest browsers. The latest released spec of JavaScript is ES7 (aka ES2016), but since the browsers are still catching up with ES6, it looks like ES7 is never really going to be adopted as such, because most likely the next coming ES8, which might be released in 2017, will supersede it before the browsers are ready.

Curiously, there appears to be no simple way to construct an environment fully specific to a particular ECMAScript version. There is not even a way to revert to an older, fully supported version such as ES5, and thus it is not really possible to test your script for compliance. The best you can do is test it on the browsers you have access to and hope for the best.

Because of the ever changing and vastly varying support for the language across platforms and browsers, transpilation has emerged as a common idiom to address this. Transpilation mostly amounts to JavaScript code being converted to JavaScript that complies with a specific ES version or a particular environment. For example import Bar from 'foo'; might become var Bar = require('foo');. And so if a particular feature is not supported, it can be made available with the help of the right plug-in or transpiler. I suspect that the transpilation proliferation phenomenon has led to additional problems, such as the input expected by a transpiler assuming existence of a feature that is no longer supported, same with output. Often this might be remedied by additional plugins, and it can be very difficult to sort out. On more than one occasion I spent a lot of time trying to get something to work only to find out later that my entire approach has been obsoleted by a new and better solution now built-in to some other tool.

### JS Frameworks

There also seems to be a lot of disagreement on which JS framework is best. It is even more confusing because the same framework can be so radically different from one version to the next that I wonder why they didn’t just change the name.

I have no idea which is best, and I only had the patience to try a couple. About a year ago I spent a bunch of time tinkering with AngularJS, and this time, for a change, I tinkered with React. For me, I think React makes more sense, and so this is what this example app is using, for better or worse.

### React and JSX

If you don’t know what React is, here’s my (technically incorrect) explanation: it’s HTML embedded in JavaScript. We’re all so brainwashed into JavaScript being embedded in HTML as the natural order of things, that inverting this relationship does not even occur as a possibility. For the fundamental simplicity of this revolutionary (sic) concept I think React is quite brilliant.

A react “Hello World!” looks approximately like this:

Notice how the HTML just begins without any escape or delimiter. Surprisingly, the opening “<” works quite reliably as the marker signifying beginning of HTML. Once inside HTML, the opening curly brace indicates that we’re back to JavaScript temporarily, and this is how variable values are interpolated inside HTML. That’s pretty much all you need to know to “get” React.

Technically, the above file format is known as JSX, while React is the library which provides the classes used to construct React objects such as React.Component above. JSX is transpiled into regular JavaScript by a tool known as Babel, and in fact JSX is not even required, a React component can be written in plain JavaScript, and there is a school of thought whereby React is used without JSX. I personally find the JSX-less approach a little too noisy, and I also like that Babel allows you to use a more modern dialect of JS (though not having to deal with a transpiler is definitely a win).

### Minimal Working Example

First, we need three pieces of external JavaScript. They are (1) React and ReactDOM, (2) Babel in-browser transpiler and (3) a little lib called Axios which is useful for making JSON HTTP requests. I get them out of Cloudflare CDN, there are probably other ways. To do this, we need to augment our indexHTML variable to look like this:

At the very end it now loads "/js/app.jsx", which we need to accommodate as well. Back in part 2 we created a UI config variable cfg.UI.Assets using http.Dir(). We now need to wrap it in a handler which serves files, and Go conveniently provides one:

With the above, all the files in "assets/js" become available under "/js/".

Finally we need to create the assets/js/app.jsx file itself:

The only difference from the previous listing is the very last line, which is what makes the app actually render itself.

If we now hit the index page from a (JS-capable) browser, we should see a “Hello World”.

What happened was that the browser loaded “app.jsx” as it was instructed, but since “jsx” is not a file type it is familiar with, it simply ignored it. When Babel got its chance to run, it scanned our document for any script tags referencing “text/babel” as their type, and re-requested those pages (which makes them show up twice in developer tools, but the second request ought to be served entirely from browser cache). It then transpiled the file to valid JavaScript and executed it, which in turn caused React to actually render the “Hello World”.

### Listing People

We need to first go back to the server side and create a URI that lists people. In order for that to happen, we need an http handler, which might look like this:

And we need to register it:

Now if we hit "/people", we should get a "[]" in response. If we insert a record into our people table with something along the lines of:

The response should change to [{"Id":1,"First":"John","Last":"Doe"}].

Finally we need to hook up our React/JSX code to make it all render.

For this we are going to create a PersonItem component, and another one called PeopleList which will use PersonItem.

A PersonItem only needs to know how to render itself as a table row:

A PeopleList is slightly more complicated:

It has a constructor which initializes a this.state variable. It also declares a componentDidMount() method, which React will call once the component has been mounted, making it the (or one of the) correct places to fetch the data from the server. It fetches the data via an Axios call, and saves the result in this.state.people. Finally, render() iterates over the contents of this.state.people, creating an instance of PersonItem for each.

That’s it, our app now responds with a (rather ugly) table listing people from our database.

### Conclusion

In essence, this is all you need to know to make a fully functional Web App in Go. This app has a number of shortcomings, which I will hopefully address later. For example in-browser transpilation is not ideal, though it might be fine for a low volume internal app where page load time is not important, so we might want to have a way to pre-transpile it ahead of time. Also our JSX is confined to a single file, this might get hard to manage for any serious size app where there are lots of components. The app has no navigation. There is no styling. There are probably things I’m forgetting about…

Enjoy!

P.S. Complete code is here

Continued in part 4

# Building a Go Web App - Part 4

This is part 4. See part 1, part 2 and part 3.

In this part I will try to briefly go over the missing pieces in our very simplistic Go Web App.

### HTTP Handler Wrappers

A tiny rant: I do not like the word “middleware”. The concept of a wrapper has been around since the dawn of computing; there is no need to invent new words for it.

With that out of the way, let’s say we need to require authentication for a certain URL. This is what our index handler presently looks like:

We could write a function which takes an http.Handler as an argument and returns a (different) http.Handler. The returned handler checks whether the user is authenticated with m.IsAuthenticated() (whatever it does is not important here) and redirects the user to a login page, or executes the original handler by calling its ServeHTTP() method.

Given the above, the function registration now would look like this:

Handlers can be wrapped this way in as many layers as needed and this approach is very flexible. Anything from setting headers to compressing output can be accomplished via a wrapper. Note also that we can pass in whatever arguments we need, for example our *model.Model.

### URL Parameters

Sooner or later we might want to rely on URL parameters, e.g. /person/3 where 3 is a person id. The Go standard library doesn’t provide any support for this, leaving it as an exercise for the developer. The software component responsible for this sort of thing is known as a Mux or “router”, and it can be replaced by a custom implementation. A Mux also provides a ServeHTTP() method, which means it satisfies the http.Handler interface, i.e. it is a handler.

A very popular implementation is the Gorilla Mux. It is easy to delegate entire sub-urls to the Gorilla Mux wherever more flexibility is needed. For example we can decide that everything from /person and below is handled by an instance of a Gorilla router and we want that to be all authenticated, which might look like this:

NB: I found that trailing slashes are important and the rules on when they are required are a bit confusing.

There are many other router/mux implementations out there, the beauty of not buying into any kind of a framework is that we can choose the one that works best for us or write our own (they are not difficult to implement).

### Asset Handling

One of the neatest things about Go is that a compiled program is a single binary, not a big pile of files like it is with most scripting languages and even compiled ones. But if our program relies on assets (JS, CSS, image and other files), we would need to copy those over to the server at deployment time.

There is a way we can preserve the “one binary” characteristic of our program by including assets as part of the binary itself. For that there is the go-bindata project and its nephew go-bindata-assetfs.

Since packing assets into the binary is slightly beyond what go build can accomplish, we will need some kind of a script to take care of it. My personal preference is to use the tried and true make, and it is not uncommon to see Go projects come with a Makefile.

Here is a relevant example Makefile rule

The above rule creates a bindata.go file which will be placed in the same directory where main.go is and becomes part of package main. main.go will somehow know that assets are built-in and this is accomplished via an -ldflags "-X main.builtinAssets=\${ASSETS_DIR}" trick, which is a way to assign values to variables at compile time. This means that our code can now check for the value of builtinAssets to decide what to do, e.g.:

The second important thing is that we are defining a build tag called builtinassets. We are also telling go-bindata about it, what this means is “only compile me when builtinassets is set”, and this controls under which circumstances bindata.go (which contains our assets as Go code) is to actually be compiled.

### Pre-transpilation of JavaScript

Last but not least, I want to briefly mention the packing of web assets. To describe it properly is enough material for a whole new series of posts, and it would really have nothing to do with Go. But I can at least list the following points.

• You might as well give in and install npm, and make a package.json file.

• Once npm is installed, it is trivial to install the Babel command-line compiler, babel-cli, which is one way to transpile JavaScript.

• A more complicated, frustrating, but ultimately more flexible method is to use webpack. Webpack will pre-transpile and do things like combine all JS into a single file as well as minimize it.

• I was surprised by how difficult it was to provide module import functionality in JavaScript. The problem is that there is an ES6 standard for import and export keywords, but there is no implementation, and even Babel assumes that something else implements it for you. In the end I settled on SystemJS. The complication with SystemJS is that now in-browser Babel transpilation needs to be something that SystemJS is aware of, so I had to use its Babel plugin for that. Webpack in turn (I think?) provides its own module support implementation, so SystemJS is not needed when assets are packed. Anyhow, it was all rather frustrating.

## Conclusion

I would say that in the setup I describe in this four-part series, Go absolutely shines, while JavaScript, not so much. But once I got over the initial hurdle of getting it all to work, React/JSX was easy and perhaps even pleasant to work with.

That’s it for now, hope you find this useful.

# Tgres 0.10.0b - Time Series With Go and PostgreSQL

After nearly two years of hacking, I am tagging this version of Tgres as beta. It is functional and stable enough for people to try out and not feel like they are wasting their time. There is still a lot that could and should be improved, but at this point the most important thing is to get more people to check it out.

### What is Tgres?

Tgres is a Go program which can receive time series data via Graphite, Statsd protocols or an http pixel, store it in PostgreSQL, and provide Graphite-like access to the data in a way that is compatible with tools such as Grafana. You could think of it as a drop-in Graphite/Statsd replacement, though I’d rather avoid direct comparison, because the key feature of Tgres is that data is stored in PostgreSQL.

### Why PostgreSQL?

The “grand vision” for Tgres begins with the database. Relational databases have the most man-decades of any storage type invested into them, and PostgreSQL is probably the most advanced implementation presently in existence.

If you search for “relational databases and time series” (or some variation thereupon), you will come across a whole gamut of opinions (if not convictions) varying so widely it is downright discouraging. This is because time series storage, while simple at first glance, is actually fraught with subtleties and ambiguities that can drive even the most patient of us up the wall.

### Avoid Solving the Storage Problem

Someone once said that “anything is possible when you don’t know what you’re talking about”, and nowhere is this more evident than in data storage. File systems and relational databases trace their origins back to the late 1960s, and over half a century later I doubt any field experts would say “the storage problem is solved”. And so it seems almost foolish to suppose that by throwing together a key-value store and a consensus algorithm or some such, it is possible to come up with something better. Instead of re-inventing storage, why not focus on how to structure the data in a way that is compatible with a storage implementation that we know works and scales reliably?

As part of the Tgres project, I thought it’d be interesting to get to the bottom of this. If not to the bottom, then at least deeper than most people dare to dive. I am not a mathematician or a statistician, nor am I a data scientist, whatever that means, but I think I understand enough about the various subjects involved, including programming, that I can come up with something more than just another off-the-cuff opinion.

And so now I think I can conclude definitively that time series data can be stored in a relational database very efficiently, and in PostgreSQL in particular thanks to its support for arrays. I described the general approach in a series of blog posts starting with this one; Tgres uses the technique described in the last one. In my performance tests the Tgres/Postgres combination was so efficient that it possibly outperformed its time-series siblings.

The good news is that as a user you don’t need to think about the complexities of the data layout; Tgres takes care of it. Still, I very much wish people would take more time to think about how to organize data in a tried and true solution like PostgreSQL before jumping ship into the murky waters of the “noSQL” ocean, lured by alternative storage sirens, big on promise but shy on delivery, only to drown where no one could come to the rescue.

### How else is Tgres different?

Tgres is a single program, a single binary which does everything (one of my favorite things about Go). It supports the Graphite and Statsd protocols without having to run separate processes, and there are no dependencies of any kind other than a PostgreSQL database. No need for Python, Node or a JVM; just the binary, the config file and access to a database.

And since the data is stored in Postgres, virtually all of the features of Postgres are available: from being able to query the data using real SQL with all the latest features, to replication, security, performance, back-ups and whatever else Postgres offers.

Another benefit of the data being in a database is that it is accessible to any application framework in Python, Ruby or whatever other language as just another database table. For example, in Rails it might be as trivial as class Tv < ActiveRecord::Base; end et voilà, you have the data points as a model.

It should also be mentioned that Tgres requires no PostgreSQL extensions. This is because optimizing by implementing a custom extension which circumvents PostgreSQL’s natural way of handling data would mean solving the storage problem all over again. PostgreSQL storage is not broken to begin with; no customization is necessary to handle time series.

In addition to being a standalone program, the Tgres packages aim to be useful on their own as part of any other Go program. For example, it is very easy to equip a Go application with Graphite capabilities by giving it access to a database and using the provided http handler. This also means that you can use a separate Tgres instance dedicated to querying data (perhaps from a downstream Postgres slave).

### Some Internals Overview

Internally, Tgres series identification is tag-based. The series are identified by a JSONB field containing a set of key/value pairs, indexed with a GIN index. In Go, the JSONB field becomes a serde.Ident. Since the “outside” interface Tgres presently mimics is Graphite, which uses dot-separated series identifiers, all idents are made of just one tag, “name”, but this will change as the DSL expands.
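
For illustration, here is what a Graphite-style ident might look like serialized on its way into the JSONB column (the helper name is made up; in Tgres proper this is serde.Ident):

```go
package main

import "encoding/json"

// identJSON is a hypothetical helper showing how a tag-based series
// ident, a set of key/value pairs, could be serialized for a JSONB
// column. With the Graphite-style interface every ident carries a
// single "name" tag holding the dot-separated series name.
func identJSON(ident map[string]string) (string, error) {
	b, err := json.Marshal(ident)
	return string(b), err
}
```

On the SQL side, a GIN index over the JSONB column lets queries like ident @> '{"name": "servers.web01.load"}' find series efficiently.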

Tgres stores data in evenly-spaced series. The conversion from the data as it comes in to its evenly-spaced form happens on the fly, using a weighted mean method, so the resulting stored rate is actually correct. This is similar to how RRDTool does it, but different from many other tools, which simply discard all points except the last one in the same series slot, as I explained in this post.
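
The idea of the weighted mean can be sketched like this (an illustration only, not Tgres's actual rrd code): each incoming value is weighted by the duration it was “in effect” within a slot, and the slot stores the weighted mean rather than just the last value seen.

```go
package main

// slot accumulates values for one evenly-spaced series slot.
type slot struct {
	weightedSum float64 // sum of value × duration
	duration    float64 // total duration covered so far
}

// add records a value that was in effect for dur units of time
// within this slot.
func (s *slot) add(value, dur float64) {
	s.weightedSum += value * dur
	s.duration += dur
}

// mean returns the time-weighted mean of everything added so far,
// which is what gets stored for the slot.
func (s *slot) mean() float64 {
	if s.duration == 0 {
		return 0
	}
	return s.weightedSum / s.duration
}
```

A last-point-wins tool would store 20 for a slot that saw 10 for two seconds and then 20 for two seconds; the weighted mean correctly yields 15.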

Tgres maintains a (configurable) number of Round-Robin Archives (RRAs) of varying length and resolution for each series; this approach is similar to RRDTool and Graphite Whisper as well. The conversion to evenly-spaced series happens in the rrd package.

Tgres does not store the original (unevenly spaced) data points. The rationale is that for analytical value you always, inevitably, have to convert an uneven series to a regular one. Storing the original data points is not a time-series problem; the main challenge there is keeping up with a massive influx of data, and that is what Hadoop, Cassandra, S3, BigQuery, etc. are excellent at.

While the Tgres code implements most of the Graphite functions, complete compatibility with the Graphite DSL is not a goal, and some functions will probably be left unimplemented. In my opinion the Graphite DSL has a number of shortcomings by design. For example, series names are not strings but are syntactically identifiers, i.e. there is no difference between scale(foo.bar, 10) and scale("foo.bar", 10), which is problematic in more than one way. The dot-names are ingrained into the DSL, and lots of functions take arguments denoting position within the dot-names, but they seem unnecessary. For example, there are averageSeriesWithWildcards and sumSeriesWithWildcards, while it would be cleaner to have some kind of a wildcard() function which could be passed into average() or sum(). Another example is that Graphite does not support chaining (Tgres already does), e.g. scale(average("foo.*"), 10) might be better as average("foo.*").scale(10). There are many more similar small grievances I have with the DSL, and in the end I think the DSL ought to be revamped to be more like a real language (or perhaps just be a language, e.g. Go itself); exactly how hasn’t been crystallized just yet.

Tgres also aims to be a useful time-series processing Golang package (or a set of packages). This means that the Go code also needs to be clean and readable, and that there ought to be a conceptual correspondence between the DSL and how one might do the same thing at the lower level in Go. Again, the vision here is still blurry, and more thinking is required.

For Statsd functionality, the network protocol is supported by the tgres/statsd package while the aggregation is done by the tgres/aggregator. In addition, there is also support for “paced metrics” which let you aggregate data before it is passed on to the Tgres receiver and becomes a data point, which is useful in situations where you have some kind of an iteration that would otherwise generate millions of measurements per second.

The finest resolution in Tgres is a millisecond. Nanoseconds seem too small to be practical, though it shouldn’t be too hard to change: internally Tgres uses the native Go types for time and duration; the milliseconds only appear as integers in the database.

When data points are received via the network, the job of parsing the network traffic is done by the code in the tgres/daemon package, with some help from tgres/http and tgres/statsd, as well as potentially others (e.g. Python pickle decoding).

Once received and correctly parsed, data points are passed on to the tgres/receiver. The receiver’s job is to check whether the series ident is already known to us (by checking the cache) or whether it needs to be loaded from the database or created. Once the appropriate series is found, the receiver updates the in-memory cache of the RRAs for the series (which is what makes the data points evenly spaced) and periodically flushes data points to the database. The receiver also controls the aggregator of statsd metrics.

The database interface code is in the tgres/serde package which supports PostgreSQL or an in-memory database (useful in situations where persistence is not required or during testing).

When Tgres is queried for data, it loads it from the database into a variety of implementations of the Series interface in the tgres/series package, as directed by tgres/dsl, which is responsible for figuring out what the query asks of it.

In addition to all of the above, Tgres supports clustering, though this is highly experimental at this point. The idea is that a cluster of Tgres instances (all backed by the same database, at least for now) would split the series amongst themselves and forward data points to the node responsible for a particular series. The nodes are placed behind a load-balancer of some kind, and with this setup nodes can join and leave the cluster without any overall downtime, for maximum availability. The clustering logic lives in tgres/cluster.

This is an overly simplistic overview which hopefully conveys that there are a lot of pieces to Tgres.

## Future

In addition to a new/better DSL, there are lots of interesting ideas, and if you have any please chime in on Github.

One thing that is missing in the telemetry world is encryption, authentication and access control so that tools like Tgres could be used to store health data securely.

A useful feature might be interoperability with big data tools to store the original data points, and perhaps a means of pulling them out of BigQuery or whatever and replaying them into series - this way we could change the resolution to anything at will.

Or little details like a series alias - so that a series could be renamed. The way this would work is you rename a series while keeping its old ident as an alias, then take your time to make sure all the agents send data under the new name, at which point the alias can go away.

Lots can also be done on the scalability front with improved clustering, sharding, etc.

## We Could Use Your Help

Last but not least, this is an Open Source project. It works best when people who share the vision also contribute to the project, and this is where you come in. If you’re interested in learning more about time series and databases, please check it out and feel free to contribute in any way you can!