In fact, I wrote this article last year but forgot to get it done and published!
Error handling is an interesting topic. Earlier there wasn’t much controversy around it, but the advent of new techniques (Optionals etc) in combination with languages making… “interesting” choices (I am looking at you, Go) has made this subject fairly hot. So what do we want in Spry?
Exceptions, errors etc are simply “not so often expected paths of execution”. Clarity of code can often be enhanced if the vanilla code path is kept clean of clutter from all these less expected paths. One common mechanism to achieve this in some form is of course the try-catch style of exception handling.
I want most Spry code to be uncluttered. I also don’t want too many concepts in the language since Spry is meant to be minimalistic in nature.
One thing has already been added, I call it the catch-throw mechanism. The idea is to have it as a base for most of the rest of the call stack based mechanisms. For even more advanced stack manipulations the stack is gradually being reified, but that will be explored more when making the first Spry debugger.
There are quite a few ideas in current languages on error handling:
at:ifAbsent:
etc. I also found a nice article discussing some of the above.
In Spry we now have a basic catch-throw mechanism in place, and that is useful for making calls to handlers “up the stack”, not just for errors. I started with the STTCPW, so… I added a slot in the Activation record called “catcher”. Setting this slot to a code block/func installs a “guard” that will be invoked if a throw is performed inside this activation, or in an activation below it. This was simple to implement and acts as a reasonable primitive for error handling.
So… this example works in ispry:
foo = func [
  activation catcher: [echo ("Caught a ", :banana)]
  echo "Throwing a banana..."
  throw "banana"
]
foo
…will print:
Throwing a banana...
Caught a banana
If an activation record does not have a handler, throw will keep searching upwards, and if none is found it will currently do a hard process exit with code 1.
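To illustrate the mechanism (this is not actual Spry internals, just a hedged Python sketch; the `Activation`, `catcher` and `throw` names mirror the Spry concepts described above):

```python
import sys

# Hypothetical sketch of the catch-throw lookup: walk activation
# records from the throwing frame upward until a catcher is found.
class Activation:
    def __init__(self, parent=None):
        self.parent = parent   # the calling activation, None at the top
        self.catcher = None    # a handler callable, or None

def throw(activation, value):
    """Search upwards through the activations for a catcher."""
    frame = activation
    while frame is not None:
        if frame.catcher is not None:
            return frame.catcher(value)  # no unwinding, just a call
        frame = frame.parent
    sys.exit(1)  # no handler anywhere: hard process exit

top = Activation()
top.catcher = lambda v: print("Caught a", v)
inner = Activation(parent=top)
throw(inner, "banana")  # prints: Caught a banana
```

Note that the handler is simply called; nothing is unwound, which matches the Smalltalk-style behavior described below.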
Typically though, the top activation record should have a catch all handler installed and do something reasonable.
As in Smalltalk (and some other languages), throwing an exception does not unwind the call stack, so a handler doing a normal return will return back to the throw! Unless it first calls activation unwind; in that case the return will instead go to the caller of the record where the handler is installed (NOT YET IMPLEMENTED). In fact, the principle of least surprise may mean we should swap these behaviors :)
With this in place we can implement catch: (resembling try/catch) like:
catch: = method [activation catcher: :handler do self]
…so what does this do? It’s a method, so it takes a receiver block on the left. When we run the method we are in a new Activation, and we set the catcher to the given handler block passed as the argument. Then we do self in order to execute the receiver block that may possibly throw.
This enables “scoped” use familiar from most languages. Below we see it in action:
foo = func [
  # This is an activation level catcher, just to show we never reach it
  activation catcher: [echo "we never reach this"]
  # Here we call `catch:` on a block with a handler block as argument.
  # This is the classic try-catch style found in many languages.
  [
    echo "Throwing a banana..."
    throw "banana"
  ] catch: [
    echo ("Caught a ", :banana)
  ]
]
foo
…and it should print the same as before and never reach the catcher in foo :)
Going back to the general problem of error handling, my feelings so far:
if err != nil {...}
is NOT to my liking. I can understand the philosophy, but sorry, I don’t like it. Especially not in the minimalistic style that Spry tries to adhere to. So… where to go from here? Ideas?
regards, Göran
So for me, sorry, Nano is not a project “of the future”!
I accidentally got involved in the DeFi world in the CORE project, which runs on top of Ethereum. My contribution was just making a cross Discord/Telegram chatbot, RoboCORE, and writing a bit about “floor price calculations” but the whole DeFi landscape convinced me that smart contracts are an essential ingredient in a modern crypto currency platform.
Real, true scalability is also mandatory, which IMHO really has to boil down to sharding, even if novel, faster consensus models like Avalanche’s (the Snowflake consensus protocol) are damn cool.
So back to googling and reading … and there are two projects that so far stand out in my book:
…ok, so there are tons of so called “eth killers” out there. I have looked at least a little at most of them by now.
I find Elrond to be one of the most promising. Strong, solid development focus, just like Nano always had. But also good marketing, which Nano sucked at, yup, it’s the truth. Sub second transactions? No, but Elrond does them in 5-6 seconds, which is just under the threshold for being useful for payments.
Zero fees? No, not zero, but very very low. And while Nano’s idea of using a bit of PoW as payment is pretty neat, it still suffers from the “spam problem” (although recent developments may have solved it, unsure), so a low fee is probably a smarter more practical route to take. Elrond can also shift the fee to a third party so that end users can use a system feeless, that’s clever and very useful.
The Elrond team is primarily in Romania but seems to be doing a great job in maintaining a good community. They also realize that the end users are key - so they also made Maiar which was relatively recently released. It’s a very slick and smooth mobile wallet for EGLD (eGold) which also maps phone numbers and so called “herotags” to accounts. That last part is a brilliant move by the Elrond team.
When we made Canoe for Nano we also implemented a system for “aliases” (like herotags, we even used “@” as prefix) but… since it was not “on chain” it suffered from fragmentation with several different implementations and security risks in abundance. So putting this on chain is really the ONLY solution, and it is a true enabler.
Maiar takes your phone number, hashes it and then associates the hash with the Elrond account of your wallet, and stores it on chain. Net effect is that as Maiar (or other Elrond wallets) virally spreads to more people - you will be able to see which of your existing contacts in your phone already has an Elrond account and can be sent EGLD. Very nice indeed. At the same time, the phone number is not revealed, since it’s a hash being stored.
Elrond seems to have a very good base with a properly sharded design up and running on mainnet and several pieces being put in place. It also made a true run in value early 2021, from about $25 and quickly up over $210. It’s probably a good bet it will continue climbing, but hey, not financial advice of course ;)
Another contender in the ethereum killer space is the NEAR protocol. It’s a technically very capable project that has a more “grassroot, down to earth” feeling. Not as much marketing as Elrond, which to be honest sometimes gets a bit over the top for my taste. I mean, how many meaningful “partnerships” can a project really have?
NEAR has that “community feeling” that I like a lot, even the website is a .org and not a .com. No nonsense, tech first, developers first. And technically it might be one of the strongest projects around, even Vitalik Buterin has acknowledged that NEAR may present itself as a worthy challenger to Ethereum 2. Very fast finality, a solid sharded design. And to be frank, I love the website style with documentation and developer focus!
I haven’t delved that deeply into NEAR yet, but they also have a clever DNS based account naming model similar to EGLD’s. And no, there is no official mobile wallet (there are other multi currency wallets supporting NEAR), but… in some ways that may be a strategy more in line with the grassroot model of NEAR. NEAR focuses on the platform and might be better off leaving things like mobile wallets to external parties.
There are lots and lots of new smart contract platforms out there vying for a bit of the “Ethereum cake”. Will Ethereum 2 deliver and make all other obsolete? Will the future have room for 10 or 20 different platforms? Or will 2-3 of these new kids on the block (sorry for the pun) step up and take over?
I have no clue but it seems to me that:
Elrond is at the time of writing at $3,315,197,861 in market cap with 55% in circulation. NEAR is at $1,770,738,752 with 36% in circulation. Elrond has an 84x smaller market cap than current Ethereum.
Ok, so sorry, this article doesn’t go into that much technical detail. You will just have to learn for yourself!
And no, nothing of the above is financial advice. ;)
A list of common misconceptions:
And a whole bucket of other misconceptions - just pile it on! Let us start with an explanation of what floor price ACTUALLY IS and then go through the above list…
The floor price is a technical mathematical price that CORE can never ever go below as long as:
These three pillars were listed in my first article, but I list them here again. These three facts work together to create the price floor.
Ok, so… first of all - if we could mint more CORE, then of course the price could go lower. If you suddenly doubled the supply of CORE the price would promptly halve. So a fixed supply is very important. Secondly, there is the insight that the price on Uniswap is not a price set by the “sellers” - it’s not governed by any human at all, it’s just governed by math. There are tons of articles on how this works and my first article explained it carefully.
How does the price go down in the first place? If people don’t want to hold CORE and decide to sell it, then price goes down. Let’s pretend the CORE holder world consists of Lisa, Peter and John. They all have 1000 CORE each. The rest is in the Uniswap trading pairs. If Lisa decides to sell all her CORE it will end up inside one of the pairs, and she will get ETH or CBTC in return. The price of CORE goes down (Uniswap math, pillar two), as well as the pile of ETH (or CBTC) in the pair she sold into (Uniswap math, pillar two), since she got ETH (or CBTC) in return.
If John and Peter also sell all their CORE then all CORE is suddenly in the trading pairs. No CORE is held by any other party. Since there is no more CORE - remember, we can’t mint more CORE - there is no more CORE to sell into the pairs. The maximum amount of CORE that can be in the pairs is thus 10000. See pillar one above.
At this point, is there ETH and CBTC still left inside the pairs? Yes there is! The sum of all that ETH and CBTC is what we like to call the TVPL - Total Value Permanently Locked. The word “permanently” here implies that this big chunk of value can never ever be removed from the pairs, because the only way to “get” that ETH and CBTC is to sell CORE into the pair, but there is no more CORE to sell!
Ok, but what if … the people who provided the liquidity into the pairs in the first place came and removed it? Ah. Right. They can’t. That’s the third pillar above. The contracts of CORE have that distinguishing feature - that once you added liquidity and got a Liquidity Pool token in return - then you can’t reverse it.
I hope it’s clear at this point that if all CORE is sold, we would end up at the lowest price. The fun part is that this price can be calculated, because Uniswap has such a clean mathematical pricing formula. It’s not governed by supply and demand - it’s governed by simple plain math.
Now… let’s take a look at all misconceptions.
There is no human involved in “deciding” or “setting” the floor price, it’s just a mathematical boundary. But the value of CORE is up to us to decide; that depends on expectations going forward and so on, just like the value of a stock or some other valuable. The only way CORE would ever get anywhere close to the floor price would be a total freaking disaster in the CORE ecosystem, making everyone feel that CORE is simply worthless! And everyone would decide to panic sell. I guess one could say that other clones of CORE have “tested the floor price concept”, by failing miserably to deliver on promises, and thus people decided that hey, this is a worthless clone - and sold. But yes, even in such a disaster - the price simply can’t go to zero.
An analogy would be if we could go to a vault, and put in 1 ETH, and get an IOU-note back. And at any point in time I can go back to the vault and exchange that IOU back to the 1 ETH. Now… I can sell that IOU to you, but I would be daft if I sold it for 0.5 ETH, because you can always go and exchange it for 1 ETH. So … in a similar way CORE also has this base value locked away, the TVPL, so that you can always sell your CORE for at least the floor price - around $633 at the time of writing.
But CORE is a project, an endeavour, with plans going forward and a team and community delivering on those plans. Thus it has a promise of future earnings and that is what drives the current price. The floor is of course an interesting factor in all this; the fact that there even is a floor is a strength of CORE.
No. There is no human that has made any kind of estimation or valuation here. The floor price is just a mathematical truth. But sure, it does depend on certain factors - like we need the Internet to keep working and the Ethereum blockchain must still work, since the CORE ecosystem is built on Ethereum smart contracts.
The dependence on Uniswap is quite small and could relatively easily be replaced. But yes, if World War III kills the Internet - then CORE is quite worthless. But so would lots of other currencies be at that point.
No, no, no. A support level is just an imaginary construct - it’s a “level” we humans see when we study a chart. We notice perhaps that an asset tends to stay above a certain price over a period of time, and then one of us decides to call that a “support level”. Technical analysis has nothing to do with the floor price.
I am trying to lose some weight, I have been hovering around 88 kg but I want to get down to 83. Evidently there is some kind of “support level” around 88 kg :) but… depending on choice of mathematical models I could argue with decent certainty that floor mass for me is 12.5 kg. I promise that it’s impossible for me to have less mass than that, because that is the mass of my skeleton. Ok, so first draft of this article I claimed floor mass to be 0, but it felt a bit silly. The clever reader noticed me choosing the word “mass” instead of “weight” since I actually do weigh 0 kg in space, but my mass is for sure more than 12.5 kg. Perhaps not the best analogy to pick!
This is interesting though. There are already other markets for CORE than the two trading pairs on Uniswap. Hotbit and Gate are two CEXes evidently offering trading in CORE, but they currently trade at a slightly higher price than on Uniswap. Those markets follow supply and demand, using an order book, so they don’t play by the same rules so to speak. But arbitrage ought to work in the direction of making the prices similar. But remember - the floor price is the ultimate disaster scenario. What would happen? No one would want to buy CORE on those CEXes in such a panic sell off, so all sellers would flock back to trusty Uniswap, because Uniswap always buys from you! So all CORE would still flow right back into the AMM pairs, and we would still hit the same old floor. So other non AMM markets have no real impact on the floor price, at least to my current understanding.
But any new AMM pair using the Uniswap pricing model definitely impacts the floor price! When the third pair is launched the floor price will take yet another jump upwards. And if any of the already existing CORE pairs on Uniswap suddenly starts getting liquidity - they would impact the floor too.
Oh yes it can! When we had a single CORE-ETH pair then the floor was around 1.13 ETH. It could never go below 1.13 ETH. But the price of ETH in USD varies, so if we looked at the floor price in USD - then sure, it varied over time just as much as ETH varied!
But now we also have a second pair in CBTC. So the current floor price is actually part ETH and part BTC. This means the current floor price, which is around 1.66 ETH (it took a healthy jump from around 1.13 to 1.6 ETH when second pair was introduced) - should in theory actually be more stable, since one can argue that a portfolio consisting of part ETH and part BTC will have a more stable or balanced total value, than if it was only ETH or only BTC.
So it can go down in USD. But this movement is exactly the same as the movement of ETH and BTC.
That was true, when we had just the CORE-ETH pair. Now it’s not true anymore, because the floor value is part BTC also, so if BTC drops in value compared to ETH, then the total floor valued in ETH would go down. Similarly, if we valued it in BTC it could go down if ETH went down, in relation to BTC. Argh! :) But it’s not a negative, on the contrary. The current mix of the floor value makes it more resistant to movements. And with the third DAI pair coming, it will be even more stable.
All of this is just per my own understanding of the concepts involved and should DEFINITELY NOT be taken as any kind of financial advice :) But if you ask me - the floor price is a great thing and it makes CORE quite unique. It’s the solid bedrock underneath the house. We don’t want to sleep directly on it, but we sure like that it’s there to keep the house steady.
Finally, here is a trivial floor.ods spreadsheet where you can experiment with new pairs. By estimating amount of CORE in the existing and any new pairs the spreadsheet shows the new floor price.
Cheers, Göran
Find q. This is done by:
q = (10000 - sum-of-all-CORE-in-all-pairs) / (sum-of-all-CORE-in-all-pairs)
Then take one pair, for example the CORE-ETH pair, and calculate floor as:
floor = (poolCORE * poolETH) / ((poolCORE + 0.997 * (q * poolCORE)) * (poolCORE + q * poolCORE))
And that’s it, so again, using the numbers at the time of writing (ask RoboCORE with command !s in Discord or /s in Telegram):
q = (10000 - (3451 + 630)) / (3451 + 630) = 1.45037980887
And then:
floor = (3451 * 33578) / ((3451 + 0.997 * (1.45037980887 * 3451)) * (3451 + 1.45037980887 * 3451)) = 1.62336028634
At this moment RoboCORE says floor is 1.6228 so yeah, 1.62336028634 is close enough!
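The whole recipe fits in a few lines of Python. This just re-runs the numbers above, so the constants are the snapshot quoted in the text, not live data:

```python
# Floor via q, using the snapshot above: 3451 and 630 CORE in the two
# pairs, 33578 ETH in the CORE-ETH pool, 10000 CORE total supply.
TOTAL = 10000
core1, core2 = 3451, 630
eth1 = 33578

core_in_pairs = core1 + core2
q = (TOTAL - core_in_pairs) / core_in_pairs

k = core1 * eth1  # constant product of the CORE-ETH pair
floor = k / ((core1 + 0.997 * q * core1) * (core1 + q * core1))

print(q)      # ~1.45037980887
print(floor)  # ~1.62336 ETH
```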
Cheers, Göran
I showed the mechanisms involved and how to calculate the floor price, if we only have a single trading pair to worry about. I have also solved the equation for two trading pairs; that equation is currently used in RoboCORE with success.
But how to handle three pairs or more? Complexity seemed to spin out of control, but… it turns out there is a simple solution!
…but before we get to the beautifully simple solution that works for n number of pairs, we need to really bask in the glory of the complexity of solving it for two pairs using the obvious math from the one-pair solution.
Without explaining the math too carefully (sorry), the equations from one pair suddenly got a bit more complex. Let’s presume we are going to sell X COREs back into pair1, and the rest into pair2 - and afterwards the two pairs should have the same price, which will be the floor price.
For pair one we can formulate the new amount of ETH in the pool as newPoolETH = k / (poolCORE + (X * 0.997)), taking the Uniswap fee into account. And using the fact that the rest needs to get sold into pair2, we then get newPoolWBTC = k2 / (poolCORE2 + (10000 - poolCORE2 - poolCORE - X) * 0.997).
After these two trades are done the price of both pools should be equal to each other. The two price formulas are price1 = newPoolETH / (poolCORE + X) and price2 = (newPoolWBTC / (10000 - poolCORE - X)) * priceBTCinETH. So let’s put them equal to each other:
newPoolETH / (poolCORE + X) = (newPoolWBTC / (10000 - poolCORE - X)) * priceBTCinETH
…and we can expand newPoolETH and newPoolWBTC from the previous equations, giving us this final big one:
(k / (poolCORE + (X * 0.997))) / (poolCORE + X) = (k2 / (poolCORE2 + (10000 - poolCORE2 - poolCORE - X) * 0.997)) / (10000 - poolCORE - X) * priceBTCinETH
As a careful reader now notices, we have only one unknown in the equation, and that is X! Yes! So we need to solve for X. And… that’s where it gets really messy! I am not going further here, but let’s just conclude that it can be done and that it turns into a second order equation with two potential solutions for X. Often we can ignore one of the solutions as impossible in practice. This math is currently used by RoboCORE and it calculates the floor price exactly for the two current CORE pairs we have.
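If you want to dodge the messy algebra, X can also be pinned down numerically. A sketch using plain bisection, with the pool snapshot quoted later in this article (3471 CORE / 33349 ETH, 638 CORE / 179 CBTC, WBTC at 34.22899 ETH) as assumed inputs; price1 falls and price2 rises as X grows, so there is a single crossing:

```python
# Bisection instead of algebra: find the X where both pairs end up at
# the same price. Pool numbers are a RoboCORE snapshot, not live data.
FEE = 0.997
core1, eth1 = 3471, 33349     # CORE-ETH pair
core2, wbtc2 = 638, 179       # CORE-CBTC pair
btc_in_eth = 34.22899
outside = 10000 - core1 - core2   # CORE held outside the pairs

def price_diff(x):
    """price1 - price2 after selling x CORE into pair1, rest into pair2."""
    price1 = (core1 * eth1) / (core1 + FEE * x) / (core1 + x)
    sold2 = outside - x
    price2 = (core2 * wbtc2) / (core2 + FEE * sold2) / (core2 + sold2)
    return price1 - price2 * btc_in_eth

lo, hi = 0.0, float(outside)   # price_diff goes from + to - on this range
for _ in range(100):
    mid = (lo + hi) / 2
    if price_diff(mid) > 0:
        lo = mid
    else:
        hi = mid

x = (lo + hi) / 2
floor = (core1 * eth1) / (core1 + FEE * x) / (core1 + x)
print(round(floor, 3))   # ~1.625 ETH
```

A closed-form solve of the quadratic is of course faster and exact, which is presumably why RoboCORE does it that way; bisection is just the lazy person’s check.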
Someone came up with the idea on Discord or Telegram (don’t recall which one) that… perhaps all the pairs could be combined into a single pair, and thus the problem would be trivial even for many, many pairs? It sounded nice, but instinctively I felt that the “k constant” would sort of get lost in such a way of thinking.
But it was intriguing to find a solution for n number of pairs (since the third pair is probably juuuuust around the corner), so I started playing around in LibreOffice Calc. After an hour of juggling numbers and mainly goofing around it dawned upon me that… there is a solution that is so simple it’s almost… unbelievable!
Let’s pretend we have n pairs. The problem we need to solve is still how to distribute the remainder of CORE outside of the pairs into the n pairs so that they all end up having the same price (in ETH for example, or USD). There is an assumption we can make that makes this much simpler, and that is presuming that the pairs have the same price already. At least within say 1-2% or whatever small margin we allow. If they didn’t, then someone would arbitrage and make them get close again. So the assumption seems fine!
Aha! So the problem suddenly is easier. We just need to distribute our CORE so that we change the price with the same factor for all the pairs. Because if they already are the same price, and we change the price with the same factor, then they damn well ought to still be the same after we have poured our last COREs into them!
Ok, how was the price formulated for a pair now again? The current price is just poolZZZ / poolCORE (where ZZZ is ETH or CBTC at the moment, depending on pair). Ok, that is not helpful because the new price depends on the amount of CORE we sell into the pair. So, we need to figure out how much CORE to sell into each pair, to make sure the price is changed by the same factor.
Let us recall the Uniswap math from above for price1. It’s the left side of the long equation above:
price = (k / (poolCORE + (X * 0.997))) / (poolCORE + X)
This is the price of a pair after we have added X amount of CORE to it. Note that it only depends on k and poolCORE! The different pairs will have different k. Hmmm, ok. For simplicity, let’s rename poolCORE to just C for CORE.
Let us rewrite that expression to:
price = k / ((C + 0.997 * X)(C + X))
And X… let’s pretend X = q * C, or in other words, we will add a number of CORE that is proportional to the number of CORE already in the pair - the proportion we call q. So if a pair has 100 CORE and q is 0.5, then we will add 50 CORE to that pair. Now we can rewrite the formula for price once more, replacing X with q * C so it now looks like:
price = k / ((C + qC * 0.997)(C + qC))
…and we can then extract C from the expression so that it now looks like:
price = k / (C^2 * (1 + 0.997q)(1 + q))
At this point you are staring blankly at the screen thinking… he’s gone mad! The mad hatter! :) Let’s take a look and see what the heck we have arrived at.
The formula says that the new price for any of the pairs will be its k divided by C^2 (the squared amount of CORE in the pair) multiplied by “an expression of q”. And q was a proportion of the existing CORE in the pair. Uhuh.
So… if we can use the same q for all our pairs, then… the price of all our pairs will be divided by the same number, namely (1 + 0.997q)(1 + q). And since their prices were the same before, dividing by the same number will make them all the same again!
Muuhhaaahahaa! …insert evil laughter and Mad Hatter eyes glistening here… we are close now! So we just need to find a q that makes sure we put all our “outside” CORE into the pairs.
Let’s grab an example! Let’s say we have 5 pairs and they have 200, 3000, 250, 600 and 850 COREs each. This makes a total of 4900 COREs and that means we have 5100 COREs free “outside” of the pairs. Now… we just need a q to consume all 5100 COREs!
So 5100 = q*200 + q*3000 + q*250 + q*600 + q*850, which is 5100 = q(200 + 3000 + 250 + 600 + 850), and yes, q = 5100 / (200 + 3000 + 250 + 600 + 850) ≈ 1.0408. Bam! We have q and thus we can trivially calculate the new price of ANY pair, and funnily enough - we only need to calculate the new price for ONE pair, because all will have the same price!
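As a sanity check, here is that five-pair toy example in a few lines of Python:

```python
# Five pairs holding 200, 3000, 250, 600 and 850 CORE; the remaining
# 5100 CORE sit outside. One q distributes them proportionally.
pools = [200, 3000, 250, 600, 850]
outside = 10000 - sum(pools)      # 5100 CORE to pour into the pairs

q = outside / sum(pools)          # the same q works for every pair
additions = [q * c for c in pools]

print(round(q, 4))                # ~1.0408
print(round(sum(additions), 4))   # 5100.0, all outside CORE consumed
```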
You are thinking… he is just blowing smoke up our… something. Let us apply the above to the current two pairs! I asked RoboCORE at the time of writing and he said the CORE-ETH pair has 3471 CORE & 33349 ETH, and the CORE-CBTC pair has 638 CORE & 179 CBTC. This gives price1 = 33349 / 3471 = 9.60789 ETH * 388.80 = $3735 and price2 = 179 / 638 = 0.28056 BTC * 13761.76 = $3861. Hmmm, oops, a diff of almost 3%, but yeah, let’s pretend they are the same! :)
Now… Robo says floor price is 1.6325 ETH. Can we find the same number using the q approach?
So we find q using q = (10000 - 3471 - 638) / (3471 + 638) = 1.43368216111. We then take q * 3471 = 4976.3 CORE and add it to pair1, and we add the rest (5891 - 4976.3) to pair2, which should also be the same as q * 638 = 914.7 (yup it is). Now, the new price of pair1 should be (k / (poolCORE + (X * 0.997))) / (poolCORE + X), so price = (3471 * 33349) / ((3471 + 0.997 * 4976.3) * (3471 + 4976.3)). And it yields 1.625 ETH! Close enough!
Let’s look at pair2, it should end up at the same price: price = (638 * 179) / ((638 + 0.997 * 914.7) * (638 + 914.7)). It’s 0.04745334321! Eh… what the hell? Oh, in BTC :) So… multiply by the price of WBTC in ETH, which is 34.22899, and we have 1.6243 ETH!
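The same two-pair verification, scripted (again using that RoboCORE snapshot, so the constants are that moment’s data, not live numbers):

```python
# One q, two pairs: both should land on (roughly) the same floor price.
FEE = 0.997
# (CORE in pool, counter-asset in pool, counter-asset price in ETH)
pairs = [(3471, 33349, 1.0),       # CORE-ETH
         (638, 179, 34.22899)]     # CORE-CBTC, WBTC priced in ETH

core_in_pairs = sum(c for c, _, _ in pairs)
q = (10000 - core_in_pairs) / core_in_pairs   # ~1.43368

floors = []
for c, other, in_eth in pairs:
    k = c * other
    floors.append(k / ((c + FEE * q * c) * (c + q * c)) * in_eth)

print([round(f, 4) for f in floors])   # both land around 1.625 ETH
```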
And that’s where I say … Yippee ki-yay muth… I mean, Yippee ki-yay!
A concept central to CORE is what is known as “the floor price” and in developing RoboCORE I had to learn how to calculate it properly. This concept is a bit hard to grasp, so this article tries to clear up the fog!
What is a floor price? It’s simply the lowest price CORE can ever reach. It’s not a floor in fiat, like the USD, but rather a floor measured in ETH (or BTC), the assets CORE is traded against. So if those drop to zero in worth, then there is nothing CORE can do about that!
Earlier CORE was available only in a single trading pair CORE-ETH on Uniswap. Let’s pretend it still is, for the sake of most of this article, and we can expand to two trading pairs at the end.
There are three main mechanisms that together give CORE its floor price:
Ok, so there are only 10000 CORE, ever. Some of these CORE are in the trading pair; at the moment of writing there are 3333 CORE there, and 34667 ETH. So “outside” of the pair, in the hands of people, there must be 10000 - 3333 = 6667 COREs. Again, disregarding the second trading pair that was just recently started - let us just pretend that hasn’t happened :)
NOTE: I sometimes write pair and sometimes pool. It’s the same thing.
Some of you may even be unaware of how a Uniswap pair works. The simple mental image is that it is a trading pair of two coins, in this case CORE and ETH. It lets you buy or sell CORE against ETH. And it does so without an order book; this is the beauty of Uniswap and so called decentralized exchanges. The pair itself consists of two piles of money, one pile of CORE and one pile of ETH. These two piles together are often referred to as a “pool”. The amount of money in the two piles, summed together, is called the liquidity of the pair. Trading is done by “swapping”: you basically put in CORE and get ETH out, or vice versa.
Now… mechanism number 2 says we cannot remove liquidity from the pair. This is essential and means that the ETH in the pair can only be “removed” from the pair by buying it, using CORE. There is no other way; no one can just remove both chunks of money. The fact that liquidity is locked in is a key novel feature of the CORE trading pairs. The CORE project invented this idea and it has profound effects, including creating the price floor.
Then we arrive at mechanism number 3. What is the price of a Uniswap trading pair? It’s trivial: the price at all times is simply <pooledETH> / <pooledCORE>. Simple!
At the time of writing price = 34667 / 3333 = 10.40 ETH/CORE. Ok, given this insight, this should mean that the lowest price is reached by increasing the number of CORE in the pair and decreasing the number of ETH in the pair. And the only way we can do that is by selling CORE into the pair so we can pull out ETH. Let this sink in, since it’s fundamental. We reach the lowest price when we have the highest amount of CORE and the lowest amount of ETH in the pair.
So when I sell 10 CORE into the pair, the price will go down. If I sell 10 more it will go down even more. If I sell ALL CORE THERE IS outside the pair, 6667 CORE, then the price should be down at the floor price, because there is no more CORE to put into the pool!
A lot of people start fiddling with the Uniswap trading dialog at this point, entering “6667” and seeing the price Uniswap would buy those COREs for, and mistakenly conclude this price to be the floor price. It is NOT. That’s the price you would get for selling 6667 CORE into the pair, but it is NOT the price CORE will have after you sold! So just drop that notion.
The math that comes below will also show how that particular price is calculated by Uniswap, but it’s still an uninteresting price - unless you really do have 6667 CORE to sell. The interesting price is what CORE will cost AFTER all CORE has been dumped into the pool, because if that would indeed occur - it would be lots and lots of little dumps, until the very last CORE was dumped. It would never be a single Megalodon dumping 6667 CORE.
So at this point I hope I have convinced you, the reader, that CORE will have its lowest price when ALL CORE is in the trading pair. All 10000 of them. The problem is, we don’t know how much ETH is left in the pair at that time. If we knew, then the price would simply be that amount of ETH divided by 10000, right?
Going back to Uniswap rules, there is a really slick rule that we must understand and that is the rule of “keeping k constant”. All Uniswap pairs abide by this rule, and it’s also the rule that figures out the price when you enter an amount to sell or buy in the Uniswap trading dialog.
Now what is k? k is the name they have given to the mathematical product of the pooled assets. So k = <pooledETH> * <pooledCORE>. So right now k = 3333 * 34667 = 115545111. It’s just a number, not really something that can be interpreted to mean anything all by itself. But Uniswap will make sure it stays constant after each trade. This is what makes Uniswap tick. If k is 115545111 now, it should be so even after the next trade.
This means, that after you sell say… 100 CORE into the pool, k should still be 115545111. But wait a minute, if you sell 100 CORE, then CORE in the pool will go from 3333 to 3433 so… in order for k to stay constant the amount of ETH in the pool must be lowered. Well, duh, of course, because if you sell 100 CORE you want some ETH in return! So this determines the amount of ETH you will get for your 100 CORE, by determining how much ETH should be left in the pool.
Let’s do the numbers. k = 3333 * 34667. You sell 100 CORE. So now k = 3433 * x where x is the amount of ETH that should still be in the pool. But k needs to stay constant, so we can replace k in the equation with the value k had BEFORE the trade, which gives us 115545111 = 3433 * x. And ok, we can now solve for x, so x = 115545111 / 3433. What was x now again? It was the amount of ETH left in the pool after the trade. It turns out to be 33657.2. Since we had 34667 before, that must mean you got 34667 - 33657.2 = 1009.8 ETH for your 100 CORE. The price thus ended up being 1009.8 / 100, which is 10.098 ETH/CORE. But the price now, after you sold, is actually 33657.2 / 3433 = 9.804 ETH/CORE! Aha! That’s interesting in itself: the price you get is not the same as the new price right after your transaction.
So this is how Uniswap figures out how much ETH you will get when you sell CORE - and of course the other way around too. And this obviously also gives us the amount of ETH still left in the pool.
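To make the arithmetic above easy to replay, here is a small Python sketch of the constant-product rule using the example’s numbers (the sell_core helper is my own naming, not anything from Uniswap):

```python
def sell_core(pooled_core, pooled_eth, core_in):
    """Constant-product swap, ignoring fees: k must stay the same."""
    k = pooled_core * pooled_eth
    new_core = pooled_core + core_in
    new_eth = k / new_core           # the x in the text
    eth_out = pooled_eth - new_eth   # what the seller receives
    return eth_out, new_core, new_eth

eth_out, core, eth = sell_core(3333, 34667, 100)
print(round(eth_out, 1))        # 1009.8 ETH received for 100 CORE
print(round(eth_out / 100, 3))  # 10.098 ETH/CORE paid
print(round(eth / core, 3))     # 9.804 ETH/CORE, the price after the trade
```

The two last prints show exactly the gap discussed above: the average price you got versus the spot price right after your trade.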
Finally we have reached the thought experiment: what happens if we sell ALL available CORE outside the pool into the pool? And is the end result the same if we sell it all in one big swoop, or if 1000s of people do it with 1000s of small paper cuts? The answer is that it does not matter how many trades we do it in, which is kinda logical: k must remain at 115545111 at all times, no matter how many trades we use.
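This claim is easy to check numerically. Here is a hedged Python sketch (helper name my own) comparing one big dump against 6667 dumps of 1 CORE each:

```python
def sell(pooled_core, pooled_eth, core_in):
    # constant-product swap, no fees: new ETH amount is k / new CORE amount
    k = pooled_core * pooled_eth
    new_core = pooled_core + core_in
    return new_core, k / new_core

# one Megalodon dumping 6667 CORE at once
core1, eth1 = sell(3333, 34667, 6667)

# 6667 little dumps of 1 CORE each
core2, eth2 = 3333, 34667
for _ in range(6667):
    core2, eth2 = sell(core2, eth2, 1)

print(core1, core2)             # both end at 10000 CORE in the pool
print(abs(eth1 - eth2) < 1e-4)  # True: same ETH left, either way
```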
So yes, let’s see what happens if we sell ALL of CORE into the pair. It’s just the same as above, but this time we sell 6667 CORE instead of 100. Again, we want to know x, the amount of ETH left in the pool. Given k, this must be x = 115545111 / 10000. So we would have 11554.5111 ETH left in the pool. And then the new price is simply price = 11554.5111 / 10000 = 1.15545111 ETH/CORE.
This is our floor price. But do note it is a floor price in ETH. So if ETH goes down in fiat value, that means CORE’s floor in fiat also goes down, obviously. But it will NEVER EVER go down in ETH value below 1.15545111 ETH.
Ok, so indeed, we cheated with one more thing: the fee. Uniswap takes a fee on every trade, and that fee goes back into the pool as added liquidity. This actually means that the amount of ETH slowly grows a little bit in the pair, slowly growing the floor price. So instead of doing x = oldk / (pooledCORE + addedCORE) we need to do x = oldk / (pooledCORE + (addedCORE * 0.997)). When calculating x, the amount of ETH left in the pool, we now divide by a slightly lower number, which makes the amount of ETH left in the pool slightly higher (and thus the amount of ETH you receive for your CORE a bit less). This means the ETH pool grows a teeny bit, k actually grows a teeny bit, and thus the floor price also grows a teeny bit, with every trade. RoboCORE takes this into account, but for simple calculations it does not change the outcome much at all.
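Here is a hedged Python sketch of that fee adjustment (names are my own; the 0.997 factor is the 0.3% fee from the formula above):

```python
POOLED_CORE, POOLED_ETH = 3333, 34667
K = POOLED_CORE * POOLED_ETH

def eth_left_after_sell(added_core, fee=True):
    # With the fee, only 99.7% of the CORE you send in counts in the
    # price formula, so slightly more ETH stays behind in the pool.
    effective = added_core * 0.997 if fee else added_core
    return K / (POOLED_CORE + effective)

no_fee = POOLED_ETH - eth_left_after_sell(100, fee=False)
with_fee = POOLED_ETH - eth_left_after_sell(100, fee=True)
print(no_fee > with_fee)  # True: the seller receives a bit less with the fee
```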
Ah yes, the second trading pair. This makes it all much more complex! In the next article I will explain how to deal with two pairs, but it basically goes like this: instead of pouring all 10000 CORE into a single pair, we now need to do a balancing act. We need to pour some CORE into the CORE-ETH pair and some into the new CORE-CBTC pair, until there is no more CORE to pour.
So at that point, all 10000 CORE would be distributed in some way between the two pairs. But how do we know how much we should put into each pair? Well, given that arbitrage would occur, any stable floor price would have to be the same for both pairs. So the math problem boils down to finding the distribution of all 10000 CORE between the two pairs that makes them have the same price.
Without explaining in this article how we calculate that, I can show the current numbers. As we saw above we have 3333 CORE in the CORE-ETH pool, and there is now 615 CORE in the CORE-CBTC pool. Calculations say we would find the common price if we put 5048 of the remaining CORE into the CORE-ETH pair and the rest, 1004, into the second pool, bringing the totals to 8381 and 1619 CORE respectively.
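The balancing act itself can be sketched as a bisection: find the split of the remaining CORE that leaves both pairs at the same ETH price. Note that the CORE-CBTC pool depth and the CBTC/ETH rate below are made-up placeholders, since the article does not give them - only the method is illustrated here, not the real numbers:

```python
def price_after_dump(pooled_core, pooled_other, added_core):
    # price (in the pair's other asset) once added_core has been sold in
    k = pooled_core * pooled_other
    new_core = pooled_core + added_core
    return (k / new_core) / new_core

ETH_POOL = (3333, 34667)        # from the article
CBTC_POOL = (615, 192)          # hypothetical CBTC depth, NOT a real figure
CBTC_PER_ETH = 0.03             # hypothetical rate, to compare prices in ETH
REMAINING = 10000 - 3333 - 615  # CORE still outside both pools

lo, hi = 0.0, float(REMAINING)
for _ in range(100):
    mid = (lo + hi) / 2
    p_eth = price_after_dump(*ETH_POOL, mid)
    p_cbtc = price_after_dump(*CBTC_POOL, REMAINING - mid) / CBTC_PER_ETH
    if p_eth > p_cbtc:
        lo = mid  # ETH pair still pricier: pour more CORE into it
    else:
        hi = mid

split = (lo + hi) / 2
print(round(split), round(price_after_dump(*ETH_POOL, split), 4))
```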
This makes the floor substantially higher, at around 1.63 ETH right now. One last observation: with two pools the floor value can actually go down when valued in ETH. Before it could not do that, but with the second pair we also need to take the CBTC-ETH rate into account. The current floor does not rely only on ETH but also on CBTC, which means the relationship between BTC and ETH will affect the floor, and thus the floor price valued in ETH can actually go down.
That’s all for today, keep HODLING :)
So I am now trying to “clean house” on where Spry stands today. This sweep through the old articles is a first step; then I will update the language manual to be 100% in sync with the implementation.
Let’s go through the articles from the beginning!
Things wrong in article one:

- funci is now called method, indicating it is a message to a receiver on the left
- if has been changed to the more Smalltalkish then:else: (but shorter than ifTrue:ifFalse: and reads nice)
- return is now ^ just like in Smalltalk

In article two:

- The $-prefix is used, not ^ (which is return). This actually looks kinda ok, it makes them resemble “variables” from other languages.
- . and .. have been removed and replaced with another way to assign.

In article three:

- There is no ifelse, instead we have then:else:, else:then:, else:, then:.
- [:x :y ^[x + y]] instead of [^[:x + :y]]. So it will probably have to stay.
- ? is not used anymore, we can use set? to check if an unevaluated word is bound, like x set?.
- Instead of . and .. :
  - x - look in locals and then all the way out.
  - @x - look in closest surrounding Map (self).
- & has been changed to Smalltalk style , for concatenation, but leaning towards +.
- undef vs nil is slightly different:
  - nil is a value that means “no value”
  - undef has been removed, it was a fun experiment
- ?-marks are used, by convention, for funcs/methods that return booleans. The !-mark is still unused.
- self is the receiver for methods. It can be anything.
- context is now locals, and returns the Map of the local scope.
- activation returns the current activation, not yet explored much but it’s there!

In article four:

- The bindings word is now locals.
- object is partly the way it works, yes: it takes a Map as argument and returns an “object”. But… this is done using tags, which are not described in this article.

In article five:

In article six:

- Foo::bar first finds Foo, and then looks in it. So it works for Maps in general, not just modules. And yes, you can then shadow a module with a global.
- The @x syntax: since self is now bound to the receiver of methods, this resolves members also. One issue though: if you use blocks (and not funcs), then nested blocks in a method sent as parameters to other methods will obviously not resolve to the lexical self, but to the receiving self. Changing the block to a func solves this.

In article nine:

In article ten:

And finally, in article twelve:

- := is for reassignment. If no existing binding is found there will be an error (not fixed yet). So = will work fine for single assignment, and will assign in local scope. And := signifies reassignment and will look up outwards before assigning.
- Removed undef. It felt neat but gets confusing. Maps can still hold nil, it’s a valid value; if you want to check for a missing binding you will have to use explicit calls to do it instead, just as in Smalltalk.
- Removed the . and .. scoping words. They can instead be implemented as direct access to locals or activation parent lookup etc.
- Added the catch:, throw and try:catch: mechanism, but more to come.

More on error handling in the next article!
To spice it up, for no specific reason at all, we are doing it all inside a Linux Container - a fast virtual environment to work in. It’s just a nice way to have a clean environment and to ensure that you as a reader see the same results as I do.
You can of course just skip the part on LXC and go directly to Nim fun. :)
NOTE: The following presumes you are on an Ubuntu box; a virtual machine works fine.
Linux Containers let us run an isolated full Linux system inside a Linux host, kinda like KVM/Virtualbox but much more lightweight, similar to Docker. Contrary to Docker though, LXC is not constrained to a single process. Instead it behaves like a full VM, which is much more what I want!
LXD is then a REST based daemon sitting on top of LXC that also gives us nice CLI tools operating against the daemon. See this nice blog article series on LXD version 2.0. Let’s install LXD and get us a clean spanking new Ubuntu 17.04 environment!
NOTE: More detailed steps are found here, and there is also a cheat sheet
sudo apt install lxd lxd-client zfsutils-linux
newgrp lxd
Then step through a bunch of questions; just using the defaults works fine:
sudo lxd init
So that dance felt long, but… it was worth it!
Now we can fire up a fresh Ubuntu, say version 17.04, and call it nim
:
lxc launch ubuntu:17.04 nim
We can now see it’s running:
lxc list
And we can get a root shell inside it:
lxc exec nim -- bash
But better to login properly as the ubuntu
user:
lxc exec nim -- su --login ubuntu
Today the preferred way to install Nim on Linux is to use choosenim, a neat toolchain multiplexer which makes it easy to switch between different versions of the Nim compiler. First we install GCC though, needed by choosenim:
sudo apt install gcc
Then we can do the dance to install choosenim and nim:
curl https://nim-lang.org/choosenim/init.sh -sSf | sh
echo "export PATH=~/.nimble/bin:\$PATH" >> ~/.bashrc
export PATH=~/.nimble/bin:$PATH
And we should have the Nim compiler in our path:
ubuntu@nim:~$ nim --version
Nim Compiler Version 0.17.2 (2017-09-07) [Linux: amd64]
Copyright (c) 2006-2017 by Andreas Rumpf
git hash: 811fbdafd958443ddac98ad58c77245860b38620
active boot switches: -d:release
Alright! Time to make a small Nim program called “moni” - don’t ask why. First create a directory to work in; obviously we should use git etc, but I leave that to you. We also run nimble init to get a skeleton of a so-called .nimble file. Nimble is the “npm” of the Nim ecosystem, and a nimble file is similar to package.json for npm.
mkdir moni && cd moni
nimble init
Now, let’s add some more lines to moni.nimble
, starting with these three in the top section:
binDir = "bin"
bin = @["moni"]
skipExt = @["nim"]
This tells nimble that this package produces binaries and will put them in the directory bin when building. We also tell it that we have a list of binaries; the syntax for a seq in Nim, which is a dynamic array, looks like @[ a, b, ... c ]. So we add "moni" to that list, the executable’s name. Finally we also tell Nimble to skip installing all .nim files when this package is later installed, since we are not making a Nim library; we only want the compiled executable to be installed.
Let’s also add a dependency called docopt
which is a really nice Nim library for parsing command line arguments, to the bottom list of dependencies:
requires "docopt"
The full file should now look like this:
[moni.nimble listing omitted]
Ok, and finally, let’s write some code. To begin with the program will just parse out arguments, and can show help, save this as moni.nim
:
[moni.nim listing omitted]
Time to compile it!
If we want nimble to suck down dependencies automatically for us, then we build using nimble, it will use the moni.nimble
to figure out what to do:
nimble build
And we can then run the binary:
./bin/moni
If we supply a topic and payload we can see default values for options:
./bin/moni topic payload
We can also compile the moni.nim
file directly, simply using the nim compiler - but that would have failed initially since we didn’t have the docopt
dependency installed. But do try it now:
nim c moni.nim
The nim compiler will however put the binary in your current directory, not in bin
.
Ok, let’s get serious and add some real MQTT code into this. First add a dependency in moni.nimble
to the Nim wrapper of the PAHO MQTT C library, by adding the following line at the bottom of moni.nimble
:
requires "https://github.com/barnybug/nim-mqtt"
So with nimble we can require using direct URLs to git or mercurial repositories as well; we are not limited to the published packages in the Nimble catalog. Then make the code look like this instead:
[moni.nim listing omitted]
A few quick remarks about the code:
- When you see $something, that’s Nim’s way of saying something.toString().
- When you see &, that’s string concatenation.
- In the connect proc there is a variable called result. It’s an implicit variable available in all procs that have a return value and represents the thing that will be returned.
- In the publish proc we see the discard statement; it’s used to “throw away” return values that we ignore. It has to be done explicitly in Nim or the compiler will complain.

Then we build again:
nimble build
And let’s try running it against a public demo broker:
./bin/moni -s tcp://broker.hivemq.com:1883 sensor/99 '{"temp": 25.4, "flow": 0.7}'
could not load: libpaho-mqtt3c.so
compile with -d:nimDebugDlOpen for more information
Oops! Ok, so the MQTT wrapper library needs the C library of course! And it’s not available as a deb, so let’s get our hands dirty:
sudo apt install libssl-dev make
Then we can build and install Paho C from source:
[build commands omitted]
Let’s try building again:
nimble build
And finally we can hopefully publish via MQTT, let’s try it once more:
./bin/moni -s tcp://broker.hivemq.com:1883 sensor/99 '{"temp": 25.4, "flow": 0.7}'
If it ends with “Payload sent” we are all good! We just sent a JSON payload to the sensor/99 topic.
HiveMQ accepts anonymous connections on the test broker so we don’t need to specify username/password. In order to verify that the above actually worked, you can point your browser to http://www.hivemq.com/demos/websocket-client/ and connect on port 8000 to broker.hivemq.com, then add a topic subscription to “sensor/#” and run the above command once more. If all works you should see the message appear!
Now, to round things off we can install this little program too, locally for your user inside the LXC container that is. :) You just run nimble install
and then we have it in our path.
Ok, that’s all folks - you are now a Nim hacker!
Some background on the Spry implementation may be interesting. Spry is implemented in Nim as a direct AST interpreter (it’s not a JIT) in only about 2000 lines of code. It has a classic recursive “naive” design and uses a spaghetti stack of activation records, all allocated on the heap, relying fully on Nim’s GC to do its work. It also relies on Nim’s method dynamic dispatch in the interpreter loop for dispatching on the different AST nodes. Blocks are true closures, and control structures like timesRepeat: are implemented as primitives, normally not cheating. Suffice to say, there are LOTS of things we can do to make Spry run faster!
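As a rough illustration of the shape of such a design (a generic toy in Python, not Spry’s actual Nim code): a recursive evaluator dispatching dynamically on node types, with heap-allocated activation records linked into a parent chain, i.e. a spaghetti stack:

```python
class Activation:
    """Heap-allocated activation record; 'parent' links form a spaghetti stack."""
    def __init__(self, parent=None):
        self.parent = parent
        self.locals = {}

class IntVal:
    def __init__(self, value): self.value = value
    def eval(self, activation): return self.value

class Word:
    def __init__(self, name): self.name = name
    def eval(self, activation):
        # look the name up through the chain of activations
        a = activation
        while a is not None:
            if self.name in a.locals:
                return a.locals[self.name]
            a = a.parent
        raise NameError(self.name)

class Add:
    def __init__(self, left, right): self.left, self.right = left, right
    def eval(self, activation):
        # naive recursion: evaluate children, then combine
        return self.left.eval(activation) + self.right.eval(activation)

root = Activation()
root.locals["x"] = 40
print(Add(Word("x"), IntVal(2)).eval(root))  # 42
```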
The philosophy of implementation is to keep Spry very small and “shallow”, which means we rely as much as possible on the shoulders of others - in this case primarily Nim and its superb features, performance and standard library.
Enough jibbering, let’s do some silly damn lies - ehrm, I mean silly tests!
[Squeak snippet omitted]
The above snippet runs in around 40 ms in latest Squeak 5.1. Nippy indeed! Ok, so in Spry then with a recently added primitive for select:
[Spry snippet omitted]
First of all, if this is the first Spry code you have seen, I hope you can tell it’s Smalltalk-ish and it’s even shorter. :) A few notes to make it clearer:
- Assignment uses = and equality uses ==, no big reason, just aligning slightly with other languages.
- [] is the syntax for creating a Block (at parse time), which is the workhorse dynamic array (and code block), like an OrderedCollection.
- There is no . at the end, nor does Spry rely on line endings or indentation; the same snippet can be written exactly the same on a single line.
- We write (10 random) because Spry evaluates strictly from left to right.
- Block arguments are pulled in where they are used, like :foo. So blocks are often shorter than in Smalltalk, like [:x > 8] or even [:a < :b].
.Spry runs this in 1000 ms, not that shabby, but of course about 25x slower than Squeak. However… I think I can double the Spry speed and if so, then we are in “can live with that-country”.
Just to prove Spry is just as dynamic and cool as Smalltalk (even more so actually in many parts), we can also implement select:
in Spry itself (and for the more savvy out there, yes, detect:
can also be implemented using the same non local return trick as Smalltalk uses):
[Spry select: implementation omitted]
Without explaining that code, how fast is the same test using this variant implemented in Spry itself? 8.8 seconds, not horrible, but… I think we prefer the primitive :)
Now… let’s pretend this particular case is an important bottleneck in our 20 million dollar project. We just need to be faster! The Spry strategy is then to drop down to Nim and make a primitive that does everything in Nim. Such a 7-line primitive could look like this:
[Nim primitive omitted]
Then it runs in 10 ms!
Yup, it’s cheating, but the 20 million dollar project wouldn’t care… The thing to realize here is that it’s MUCH easier to cheat in Spry than it is in Squeak/Pharo. But… yes, you would need to know how to make a primitive, and as a primitive it’s compiled code so you can’t mess with it live, and it also presumes that each node is an IntVal. However, Spry (when I fix error handling) should gracefully handle the case where it isn’t an IntVal: that will trigger a Nim exception that the Spry interpreter should catch.
If you have made primitives in Squeak/Pharo you know it’s much more complicated. You need to take great care with allocation since the GC can move things under your feet. You must convert things to C and so on, and building the stuff is messy. Spry on the other hand shares the underlying data structures with Nim. In other words, Spry nodes are Nim objects. It’s trivial to work with them, allocate new ones like newBlok()
above creates a new block and so on. This is a huge deal! Recently when I started integrating libui with Spry (a pretty slick movie) I got callbacks from libui back into Spry working in like… 30 minutes of thinking. That’s HUGE! Doing callbacks from C or C++ back into Squeak has been a really messy and complicated thing for YEARS. Not sure if it’s any better.
Also, going pure Nim would be much faster still since it would use a seq[int]
and not a seq[Node]
(boxed ints) - a vast difference. So if we really wanted to work with large blocks of integers, a special such node type could easily be made that exposes primitives for it. Kinda like the FloatArray thing in Squeak, etc.
Let’s look at another example where Spry actually beats Squeak. And by beat I mean really beat, by factor 4x! The test is to use findString:startingAt:
in a fairly large string, to find a match, 2 million times.
[Squeak snippet omitted]
This snippet runs in 135 seconds in Squeak 5.1. The corresponding Spry code is:
[Spry snippet omitted]
Again, note how Smalltalkish the code looks - and … you know, come on Smalltalk… reading a file? It shouldn’t need to be (StandardFileStream oldFileNamed: 'string.txt') contentsOfEntireFile
for such a common and mundane task!
You gotta admit, readFile "string.txt"
is nicer. But hey, says the careful reader, what the heck is that? Yes, Spry supports “prefix functions” that take arguments from the right, Rebol style. It isn’t used much in Spry code, but for some things it really reads better. For example, in Spry we do echo "hey"
instead of Transcript show: 'hey'
. That’s another thing that is overly verbose in Smalltalk and should IMHO be fixed, at least just to save poor newbies their fingers. Anyway (end of rant)….
…Spry runs that in 33 seconds! And just to get a sense for how large the primitive is in Spry, it’s exactly 5 lines of code:
[Spry primitive omitted]
It’s quite easy to follow. We just pull in arguments and unbox them into Nim string, int, int - and then we call Nim’s find and we finish by using newValue()
to box the answer as a Spry IntVal again. This shows how easily - no… trivially we can map Spry behaviors to Nim library code which runs at the speed of C/C++.
Given all this, it would still be nice to improve Spry to come say… within 10x of Cog for general code, perhaps in this case shaving it down from 1000 ms to around 300 ms. There are several things I know I should do to improve speed in general.
I hope this got you interested in Spry!
But the last few years, finally, I have started to feel the “burn”… as in “Let’s burn our disk packs!”. And last year I started doing something about it - and the result is Spry. Spry is only at version 0.break-your-hd and several key parts are still missing, but it’s getting interesting already.
Now… is Spry a Smalltalk? And what would that even mean?
I think the reason I am writing this article is because I am feeling a slight frustration that not more people in the Smalltalk community find Spry interesting. :)
And sure, who am I to think Spry is anything remotely interesting… but I would have loved more interest. It may of course change when Spry starts being useful… or perhaps the lack of interest is because it’s not “a Smalltalk”?
The Smalltalk family of languages has a fair bit of variation, for example Self is clearly in this family, although it doesn’t even have classes, but it maintains a similar “feel” and shares several Smalltalk “values”. There have been a lot of Smalltalks over the years, even at PARC they made different variants before releasing Smalltalk-80.
So… if we look at Spry, can it be considered a member of the Smalltalk family?
There is an ANSI standard for Smalltalk - but not many people care about it, except perhaps some vendors. I should note however that Seaside has apparently (I think) brought a certain focus back to the ANSI standard, since every Smalltalk implementation on earth wants to be able to run Seaside, and Seaside tries to rely on the ANSI standard (correct me if I am wrong).
Most Smalltalk implementations share a range of characteristics, and a lot of them also follow the ANSI standard, but they can still differ on pretty major points.
My personal take is that there are ten things in Smalltalk that are pretty darn important and/or unique.
Not all Smalltalks cover all 10. For example, there are several Smalltalks without the image model and without a browser based IDE. Self and Slate and other prototypical derivatives don’t have classes. Some Smalltalks have much less evolved class libraries for sure, and some are more shallow in the “turtle department”.
In Spry we are deviating on a range of these points, but we are also definitely matching some of them!
So Spry scores 5/10. Not that shabby! And I am aiming for 3 more (#3, #5, #10) getting us up to 8/10. The two bullets that I can’t really promise are #1 and #7, but I hope the alternative approach in Spry for these two bullets still reaches similar effects.
Let’s look at #1, #2 and #6 in more detail. The other bullets can also be discussed, but … not in this article :)
In Smalltalk everything is an object, there are no “fundamental datatypes”. Every little thing is an instance of a class which makes the language clean and powerful. There are typically some things that the VM treats differently under the hood, like SmallInteger and BlockClosure etc, but the illusion is quite strong.
Spry on the other hand was born initially as a “Rebol incarnation” and evolved towards Smalltalk given my personal inclination. Rebol, as well as Spry, is homoiconic, and when I started building Spry it felt very natural to simply let the AST be the fundamental “data is code and code is data” representation. This led to the atomic building block in Spry being the AST Node. So everything is an AST node (referred to simply as “node” from here on), but there are different kinds of nodes, especially for various fundamental datatypes like string, int and float, and they are explicitly implemented in the VM as “boxed” Nim types.
In Smalltalk, objects imply that we can refer to them and pass them around; they have a life cycle and are garbage collected; they have an identity and they are instantiated from classes which describe what messages can be sent to them.
In Spry the same things apply for nodes, except that they are not instantiated from classes. Instead nodes are either created by the parser through explicit syntax in the parse phase, or they are created during evaluation by cloning already existing ones.
An interesting aspect of Spry’s approach is that we can easily create new kinds of nodes as extensions to the Spry VM. And these nodes can fall back on types in the Nim language that the VM is implemented in. This means we can trivially reuse the math libraries, string libraries and so on already available in Nim! In essence, the Spry VM and the Spry language are much more integrated with each other, and since the VM is written in Nim, Nim and Spry live in symbiosis.
Using Spry it should be fully normal and easy to extend and compile your own Spry VM instead of having to use a downloaded binary VM or learning Black Magic in order to make a plugin to it, as it may feel in the Squeak/Pharo world.
Finally, just as with Smalltalk the meta level is represented and manipulated using the same abstractions as the language offers.
Conclusion? Spry is different but reaches something very similar in practice.
But what kind of behaviors are associated with a particular node then? In Spry I am experimenting with a model where all nodes can be tagged and these tags are the basis for polymorphism and dynamic function lookup. You can also avoid tagging and simply write regular functions and call them purely by name, making sure you feed them with the right kind of nodes as arguments, then we have a pure functional model with no dynamic dispatch being performed.
In Spry we have specific node types for the fundamental datatypes int, float, string and a few other things. But for “normal” objects that have instance variables we “model objects as Maps”. JavaScript is similar, it has two fundamental composition types - the “array” and the “object” which works like a Map. In Spry we also have these two basic structures but we call them Block and Map. This means we can model an object using a Map, we don’t declare instance variables - we just add them dynamically by name to the map.
But just being a Map doesn’t make an object - because it doesn’t have any behaviors associated with it! In Smalltalk objects know their class which is the basis for behavior dispatch and in Spry I am experimenting with opening up that attribute for more direct manipulation, a concept I call tags:
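The example that originally followed here did not survive, so below is a hypothetical sketch of the idea. The word tag: and the way lookup consults tags are my guesses at the intent, not confirmed Spry syntax:

```
# Hypothetical sketch - tag: and the dispatch details are assumptions
c = {name = "Tigger"}
c tag: 'cat        # add the 'cat tag to the map node
c tags             # would answer the node's tag collection
# Funcs associated with 'cat can now be found when sent to c
```

The point is that the tag collection, not a class pointer, is what function lookup consults.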
The net effect of this is that we end up with a very flexible model of dispatch. This style of overloading is a tad similar to structural pattern matching in Erlang/Elixir.
One can easily mimic a class by associating a bunch of functions with a specific tag. The tags on a node have an ordering; this means we also get the inheritance effect where we can inherit a bunch of functions (by adding a tag for them) and then override a subset using another tag - by putting that tag first in the tag collection of the node. Granted, this is all experimental and we will see how it plays out. It does however have a few interesting advantages over class based models.
I am just starting to explore how this works, so the jury is still out.
Spry supports infix and prefix functions and additionally keyword syntax using a simple parsing transformation. The following variants are available:
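The original table of variants is lost, but hedged examples can illustrate the call styles (the function names here are just placeholders, not necessarily real Spry words):

```
echo "hello"          # prefix: the function comes first
x foo                 # unary style: function follows its single argument
3 + 4                 # binary infix
dict at: 'x put: 42   # keyword syntax, parsed into the single word at:put:
loadFile: "Foo.sy"    # prefix keyword variant, also possible in Spry
```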
This means Spry supports the classic Smalltalk message syntax (unary, binary, keyword) in addition to prefix syntax, which is sometimes quite natural, as for echo. Currently there is no syntactic support for cascades, but I am not ruling out introducing something like it down the road.
Spry is very different from Smalltalk and I wouldn’t call it “a Smalltalk”, but rather “Smalltalk-ish”. I hope Spry can open up new exciting programming patterns and abilities we haven’t seen yet in Smalltalk country.
Hope you like it!
At the moment he is rewriting the parser and code generator parts in the language itself, following a similar bootstrapping style as Ian Piumarta’s idst. For example, here is the method parsing keyword messages.
At the moment Fowltalk is nowhere near usefulness, but it’s fun stuff!
It’s interesting to look at these bootstrap* files - we can immediately notice some syntactic differences to Smalltalk-80:
Block arguments are written like [| :x :y | ... ] and you can mix both locals and params there: [| :aParam aLocalHasNoColon | ... ]. Instinctively I can agree with the combination, but I would probably then make the first | optional.
Some messages have been changed, like ifTrue:ifFalse: which is instead ifTrue:else:. I have done similar simplifications in Spry. And just like in Spry, ivars are referenced using @myIvar.
There isn’t any documentation on Fowltalk yet, but it’s clearly a rather elaborate implementation. It compiles to bytecodes, uses numbered primitives (I think) and there is an image mechanism.
It was also quite easy to get the REPL up and running, but just as with Spry, it’s hard to know how to use it! On Ubuntu I installed boost with sudo apt-get install libboost1.58-dev, and then it was easy to get it running following the instructions, as long as you change setup-linenoise.sh to setup_linenoise.sh.
The image constructed by the bootstrap process is 67Mb in size. Then we can do the canonical Smalltalk test in the REPL:
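The transcript itself is gone; the canonical test is presumably just evaluating a trivial arithmetic expression, something along these lines (the prompt formatting is a guess):

```
> 3 + 4
7
```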
Fowl mentioned that the new parser can be loaded using !read bootstrap.1, but… at the moment that causes errors.
It will be interesting to see where this goes! Fowltalk is very early in its evolution, and it’s not a JIT, but it’s a real bytecode VM with an image and we can never have enough Smalltalk-like languages! :)
In this article I do some silly experiments around interpreter startup time and fooling around with 40 million element arrays. As usual, I am fully aware that the languages (Pharo Smalltalk, NodeJS, Python) I compare with a) have lots of other ways to do things b) may not have been used exactly as someone else would have done it. A truck load of salt required. Now… let’s go!
Spry is pretty fast starting up which obviously has to do with Spry not doing much at all when starting :)
So a trivial hello world being run using hashbang, executed 1000 times from another bash script, takes substantially less time than the same in Python. Useful benchmark? Not really, but obviously we can do scripting with Spry and at least not paying much for startup times! Here are the two trivial scripts and the bash script running them 1000 times:
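The three original listings did not survive, so here is a hypothetical reconstruction of them. The interpreter name in the hashbang line is an assumption:

```shell
# Recreate the two hello scripts and the driver loop (reconstruction)
cat > hello.sy <<'EOF'
#!/usr/bin/env spry
echo "hello world"
EOF

cat > hello.py <<'EOF'
print("hello world")
EOF

# bench.sh runs whatever command it is given 1000 times
cat > bench.sh <<'EOF'
#!/bin/bash
for i in $(seq 1 1000); do
  "$@" > /dev/null
done
EOF

chmod +x hello.sy bench.sh
```

Timing would then look something like time ./bench.sh ./hello.sy versus time ./bench.sh python hello.py.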
If we run the above, first for hello.sy and then hello.py, we can compare the numbers reported by time.
Hum! So a trivial Spry script is 3-10x quicker depending on what you count (real clock vs cpu time etc), and… no, it’s not output to stdout that is the issue, even a “silent” program that just concatenates “hello” with “world” suffers similarly in Python.
We can of course also compile this into a binary by embedding the Spry source code in a Nim program - it’s actually trivial to do. The embedded source could of course be a full script. Since the Spry interpreter is modular we can pick some base modules to include; in this case the IO module is needed for echo to work, so we add it to the interpreter:
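The listing is lost; this is a hedged sketch of what such an embedding could look like. The module names (spryvm, spryio) and proc names (newInterpreter, addIO, eval) are assumptions from my reading of the article, not verified against the Spry repository:

```nim
# Hypothetical sketch of embedding Spry source in a Nim program
import spryvm, spryio           # assumed module names

var spry = newInterpreter()     # create a fresh Spry interpreter
spry.addIO()                    # add the IO module so echo works
# The embedded source below could just as well be a full script
discard spry.eval """[echo "hello world"]"""
```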
..and then we build a binary using nim c -d:release hello.nim, and if we run that instead from the same bash loop the times drop even further.
Of course Python can do lots of similar tricks, so I am not making any claims! But still very neat. And oh, we didn’t even try comparing to Pharo here :) Startup time is definitely not a strength of Smalltalk systems in general, typically due to the lack of minimal images etc.
I wanted to create some fat collection and do some loops over it. Spry has a universal ordered collection called a Block. Smalltalk has its workhorse OrderedCollection. NodeJS has an Array. Let’s stuff one with 40 million integers and then sum them up!
NOTE: The first numbers published were a bit off and I also realized an issue with Cog and LargeIntegers so this article is adjusted.
Pharo 4 with the Cog VM:
NodeJS 4.4.1:
Python 2.7.10:
Spry:
Spry with activation record reuse:
Ehum…
NOTES
If we spend some time profiling Spry we can quickly conclude that the main bottleneck is the lack of a binding phase in Spry - or in other words - every time we run a block, we lookup all words! Unless I am reading the profile wrong I think the endless lookups make up almost half the execution time. So that needs fixing. And I also will move to a stackless interpreter down the line, and that should give us a bit more.
And what about Python’s sum function that did it in a whopping 0.3 seconds? Yep, definitely the way to go with an optimized primitive function for this, which brings me to…
The secret weapon of Spry!
One core idea of Spry is to make a Smalltalk-ish language with its inner machinery implemented in Nim using Nim data types. So the collection workhorse in Spry, the Block, is just a Nim seq under the hood. This is very important.

Combined with a very simple way of making Nim primitives we can quickly cobble up a 6 line primitive word called sum that will sum up the ints in a block. We simply use the fact that we know the block consists only of integers. I am guessing the sum function of Python does something similar.
Here is the code heavily commented:
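The commented listing did not make it here; the sketch below follows the pattern the article describes, but the helper names (nimPrim, evalArg, Blok, IntVal, newValue) are assumptions rather than the actual Spry VM API:

```nim
# Hedged reconstruction of a sum primitive for blocks of ints
nimPrim("sum", true):            # register an infix primitive word "sum"
  let blk = Blok(evalArg(spry))  # evaluate the receiver, assumed a Blok
  var total = 0
  # We rely on the block containing only int nodes, as the article notes
  for node in blk.nodes:
    total = total + IntVal(node).value
  return newValue(total)         # box the Nim int back into a Spry node
```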
It’s worth noting that almost all primitive words in Spry are written using this same pattern - so there are lots of examples to look at! Of course this is a bit of “cheating” but it’s also interesting to see how easy it is for us to drop down to Nim in Spry. We create a new word bound to a primitive function in exactly 6 lines of code.
So how fast is Spry using this primitive word? It sums up in blazing 0.15 seconds, about 100x faster than Cog and 10x faster than NodeJS for summing up. And yeah, even 2x faster than Python!
And yes, we can easily make this primitive smarter to handle blocks with a mix of ints and floats and a proper exception if there is something else in there - then it ends up being 17 lines of code, and still almost as fast, 0.17-0.18 seconds! I love you Nim.
In summary, Cog - which is what I am most interested in comparing with - is fast but my personal goal is to get Spry within 5x slower in general speed - and that will be a good number for a pure interpreter vs an advanced JIT. And if we throw in primitive words - which is not hard - Spry can be very fast!
That’s obviously hard to do, but I am trying a little bit by questioning every little thing that many consider not even relevant or possible to question. Some examples are:
Everyone is so busy “doing stuff” that no one takes the time to actually reflect. Can we really not create a development system in which I can see exactly what is going on? Are there really no more powerful ways to do debugging?
So I may not look into the future, but most good ideas come from someone doing something unexpected, weird, impossible or downright stupid. In Spry I want us to try a few of those :)
I don’t really dare, but I think it’s safe to say that Virtual Reality is probably going to be accessible everywhere. JavaScript has hopefully waned, leaving behind a new much lower threshold to programming being the norm, not the exception. Everyone wants to be able to program. Hardware is basically free, very capable and everywhere. People tend to think that the web is taking over everything, but I don’t think it’s that simple - I think diversity is going to be much higher due to new companies creating new kinds of devices. Many more devices.
How does this affect choices in Spry? Well, I tend to not let performance considerations hinder various ideas. I also focus pretty hard on mobility of code and data, since I think we should be able to find a lot more models of computing in the area of distributed systems.
Finally I do think DSLs in different shapes or forms will play a big part in the future - so Spry should have excellent capabilities for that.
I also want Spry to be modular on most levels, while still being fairly simple.
The Smalltalk team was focused on user interfaces, education and children. With Spry “people” means primarily “developers”.
I don’t think 20 years will remove the need for writing code, but the pressure for fast results will be immensely higher. I also think the boundaries of computing will be much fuzzier and that we will need to have more advanced tools to create and mold code into doing what we want. Things will run on many devices, distributed in novel ways reaching places in our lives we can not really imagine.
I want to create and modify systems live as they run, as they are being used. Not just run locally, or as prototypes, but as they run live in deployment. Continuous deployment will probably evolve into 100% live online development. How will that affect developers? What tools do we need? How can we evolve a live system with confidence?
This implies we will have to create much more powerful ways to create, debug and modify code. We need to raise the abstraction levels, but perhaps a key to that is to create a homoiconic language that lends itself to introspection and self reference. Smalltalk didn’t do that (only to some extent), nor did JavaScript. The Lisp family of languages did to some extent, but for various reasons never really took off. Hard to say why.
One vision is a globally shared live system of cooperating Spry objects. Like GemStone/S but on a global scale, and taken even further to the extreme. Today developers share code - dead code - via various package catalogs and copy/paste forums. The SaaS and PaaS etc are trying to create shared platforms, but it’s still very much centered around the same coding model where we don’t share actual functionality, but merely code and libraries to recreate the functionality on our own.
To be concrete - instead of downloading a library and creating a small service that consumes a live feed of data and produces a stream of Spry objects, in Spry we would find not a library, but a live, running, existing service that we just hook into. The module is not dead code, but actually a live and running service.
This is homoiconicity driven all the way! During the years sharing of objects have been tried via various RPC-ish standards like CORBA or RMI, but those standards have always revolved around static early binding and separate specifications and have thus later been completely run over by late binding self describing technologies like REST-ful APIs using JSON and similar “soft” formats. Late binding and self description is key for how modern development is done to a large extent - experimentation.
Another vision is Spry being a language to serve as a new foundation for transferable portable active code. Kinda like a JavaScript that doesn’t suck and that is homoiconic and thus easy to make tools for.
But in the end… I don’t have any grand delusions about Spry - it’s all for fun and I just hope some of us will find it useful!
Obviously I don’t have this. Yet. I hope that if I can make enough progress on my own - then people will join. And I stand on firm shoulders in the form of Nim, which makes Spry suffer less from NIH (no pun intended). I hope that some Smalltalkers will eventually join, but I need a good solid language manual and perhaps even a reasonably interesting IDE to get any real traction.
Spry has almost reached the point where we can start working on the fun stuff. The module system, serialization mechanisms and lossless AST improvements are all crucial steps towards this. The next step is getting the OO model working and doing the Sophia integration to get a working image based system. After that I suspect it’s time to make a first IDE. I do have some plans and ideas for that too :)
I think this is definitely important. The existing REPL is just a crude first trivial step. But it will get better!
I personally want to apply Spry in the domains of VR and IoT. Web systems are no longer that interesting to me, but if someone would like to evolve Spry compiled to js - I would be very grateful.
Hopefully, eventually! :)
When I built SqueakMap waaay back I was already then tainted with the idea of shared object models and one of the primary ideas in SqueakMap was to make sure each local Smalltalk environment got a full live object model of the catalog which then could be queried, viewed and reasoned about inside the Smalltalk environment. Much more powerful than a bunch of JSON files on disk. This led to the approach of downloading the full catalog in a serialized form - and then loading it into Squeak.
With Spry I want us to create a simpler meta model - at least for starters - but with an even smarter infrastructure backing it…
A quick summary of where the Spry modules implementation is today:

- A Module keeps its meta information under the key meta with members like name and version. First I was thinking of using _meta but I am opting instead for plain meta. Collisions? Deal with it.
- Module qualified words look like Foo::bar or ^Foo::bar.
- A Block called modules in root references the Modules that should be consulted in order for lookups of non qualified words.

Modules can be trivially serialized or deserialized in source form, just like any other Spry node. This is how we serialize any node in Spry - remember that data and code is the same thing, it’s all turtles… I mean nodes. Thus, the source code, or file format, of a Map (and thus also a Module) looks like this:
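The 15-line listing is lost; a hypothetical sketch of such a Module-as-Map source file could look like the following. The literal Map syntax and the = for keys are my assumptions based on the article’s description, not verified Spry code:

```
{
  meta = {
    name = "Foo"
    version = "1.0"
  }

  # A function exported by the module
  bar = func [echo "Hello from Foo"]
}
```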
If we store this as Foo.sy (.sy being the Spry file extension I use) we can trivially load it into Spry with loadFile: "Foo.sy". This is an example of a prefix keyword func - it could just as well have been named loadFile - but I am experimenting with finding good Spry conventions around this, and that’s the subject of another article :)
I have realized two issues so far:

- Foo::bar as implemented at the moment looks directly in globals for Foo.
- A Module that is just a plain Map offers no obvious place for private state.

The first issue: on my fourth thought I decided it’s powerful to have Foo::bar be implemented as equivalent to Foo at: 'bar. So I will make sure to look for Foo first using normal scoping lookup. This enables shadowing of modules, but it should be a Big No No because it turns into a kind of import statement and as a reader of code you wouldn’t be sure what Foo::bar really resolves to. But Spry can easily detect if you introduce a Module shadow, and hit you hard on the head!
The second issue is more intricate and caused me to think quite hard on which route to take. If we wrap the Map inside a block, we get a closure, and then we could create private bindings in that closure. That resembles the techniques used in the JavaScript community, so definitely not an odd concept. But it also leads to the module not being serializable as itself. The Map is no longer the module itself, instead it only holds the “exports” of the Module.
I want to stick to a declarative Map style and introduce hooks like Foo::init that is called upon Module load, and perhaps Foo::release on Module unload. But how should private state of the module be created? Let that simmer while we dive into another aspect…
It would be pretty nice if we could unify storage of Modules (Spry nodes in general) so that we could simply store: Foo asFile: "Foo.sy". Today we can do that, but all indentation and comments are lost in the round trip! So… it would be super slick if we could once and for all get rid of source code :). Smalltalk never went all the way on this - although it came quite close. Various ideas around this:
I am leaning towards the latter, even though it’s obviously insane.
So… if I extend Node with an optional string containing the “all whitespace and comments” right before the Node itself - then we should be able to serialize/deserialize without losses, except for anything coming after the very last node :). Default whitespace is a single space, which we represent as nil. And sure, wasting a full reference in every Node? I agree, completely nuts, but perhaps we can somehow magically avoid that later on. It still is too tempting to try!
Fiddling with closures for modules, as is done in JavaScript, feels hacky. First of all, I don’t like modules that are primarily constructed by running code. It’s too brittle. I want to have a loading phase that is declarative-ish, and then an activation phase where the module can execute code in specific hooks. This means that Spry can analyze the module when it’s loaded to check for collisions or other things. The simplest way of loading modules is to simply eval the Curly to get a Map - but we could trivially create a “safe Module loader” that doesn’t use plain eval and thus we would plug that security hole. For now, eval is fine though!
This leaves us with the question on how to create private state in the Module.
In earlier articles I introduced the concept of scoped words, .x and ..x, but haven’t followed through on actually implementing them. The .x could mean “start resolving in closest enclosing Module or Object”. This would make it work like instance variable access - and a Module will most obviously turn into an Object when I get the OOP stuff in place. Now, to make x be private I am thinking of _x. I stared at the ASCII table and didn’t think the alternatives were good. And most developers use _ to denote privateness. This means I have decided to go with:

- .x means in the closest enclosing Object. To begin with the closest enclosing Map is good enough.
- ..x means somewhere outside of the closest enclosing Map. The definition of that we can experiment with later.
- _x means just like .x but private.

So I will change:

- Foo::bar to resolve Foo normally first. This enables :: to be used for “property access” in general.
- .x and _x to behave as described above.

Happy Sprying!
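To make the proposal concrete, here is a hypothetical annotated summary; none of this syntax is implemented yet, so it is purely a sketch of the proposed word forms:

```
# Sketch only - proposed, unimplemented word forms:
.x          # resolve x in the closest enclosing Object (Map, for now)
..x         # resolve x somewhere outside the closest enclosing Map
_x          # like .x, but private to the enclosing Object
Foo::bar    # resolve Foo with normal scoping first, then bar inside it
```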
Smalltalk has a Dictionary holding all the globals forming “the roots” of the object memory. In Smalltalk this Dictionary is also itself a global variable accessible as Smalltalk, in other words Smalltalk == (Smalltalk at: #Smalltalk). The primary use of Smalltalk is to hold all classes by name, so they are all reachable as globals. Obviously Smalltalk can also hold any kind of object (not just classes) as a global.
Spry also has such a top level Dictionary, but in Spry we call a Dictionary a Map to be a little bit more aligned in terminology with other languages (and it’s shorter). This top level Map is the root Map and it is accessible via the word root. In Spry the root word is actually bound to a primitive function returning this Map, so in Spry we also have root == (root at: 'root).
Ok, so Spry has a Map of globals, and one way of using Spry is simply by populating root with words bound to functions, making these functions globally accessible - it’s how I have done it so far. Yeah, yeah, I know, but for smaller systems it probably works just fine!
But…
But we do want a Module concept and given that I once designed a Namespace model for Squeak (that never was accepted) - it was inevitable I guess that it would reappear in Spry! :)
As many other languages do, I also simplify by making a “Module” do double duty as a “Namespace”. It’s a reasonable approximation, although to be precise a Module is normally a deployment, versioning and distribution unit and a Namespace should ideally be aligned with how we humans are organised, but… whatever. In Spry I also simplify by not allowing nesting of Modules. A Module is simply a Map bound to a global word.
Modules need to have meta information about them. In Nim we use a Foo.nimble file to contain this meta information. In the JavaScript world there is a package.json file containing meta information. In Spry, since a Module is a Map, we let the _meta key hold a Map of meta information:
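The small listing is lost; a hypothetical sketch using only the keys the article mentions (the literal Map syntax is my assumption):

```
{
  _meta = {
    name = "Network"
    version = "0.1"
  }
}
```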
The name of the Module is thus only kept as meta information. This means that the code loading the module into our system decides what the Module should actually be named - thus we can choose to load a Module Foo by the name Foo2 if we already have a Module called Foo in the system. It could for example be used to have two different versions of the same Module loaded at the same time.
So how do we refer to things in different Modules? Obviously we can do it manually using (root at: 'Network) at: 'Socket - it’s just a global Map after all - but Network at: 'Socket is simpler. I am also introducing yet another word type - a Module qualified word. It would look like Network::Socket and be evaluated as Network at: 'Socket. If we load another Foo as Foo2, then all existing references like Foo::Cat will of course not refer to the new Foo2, but we could easily scan for them and modify them, if we so wish.
Finally, we face the issue of imports. Almost all programming languages use imports, often per class or file, but also per module. It’s worth recognizing what true purpose they actually serve.
One use of them is to avoid typing long names in the actual code, but… that would typically only be an issue if module names were… say, very long like com.MyCoolCompany.ProjectX.Base.Common, but they aren’t in Spry, since we don’t allow nesting nor do we want people to use Internet domain names like that.
It can be used to constrain the allowed references a Module can have, but… in my experience it’s not often used to do that. One could however imagine a system of declarative rules of what modules can access what other module, or which group of modules can depend on which other group. In fact, I designed such a tool for Java back in … waaay back.
To enhance completion, only completing within the imported union of modules. I don’t really view this as a critical thing, and it can also be solved using heuristics. Smalltalk systems also complete these days, and not having imports hasn’t really made it less useful.
To act as documentation for a Module showing what other Modules it uses, but… then we should not allow fully qualified references in the code since that invalidates this purpose. And we could trivially scan and find all usages within the Module without the import statements.
In my proposal for Squeak there were no imports either; the idea was to always have the full reference in the source code, but to let browsers “render them short” if the unqualified name still was unique in the image. In Spry I am opting for a slightly different approach:

- For an unqualified word, lookup first consults root for globals. If it fails, it looks through the Modules one by one until it finds a hit. This means Socket will resolve to Network::Socket if Network is the first module found containing that word, and there is no global Socket shadowing it.
- For a qualified word like Network::Socket, lookup is directly in the module by that name, we never look at globals. If there is no hit, it’s not resolved, so no need to look elsewhere.

This means we can still use Socket in short form, but be aware that it means “Give me the first thing you find called Socket”. If we qualify it means “Give me Socket in the Network module”.
Thus, if we let root at: 'modules be a Block of the modules that wish to participate in such an ordered lookup - that should be enough!
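A hypothetical sketch of how that could look in practice; the at:put: word and the Block literal of module names are assumptions, not verified Spry code:

```
# Order matters: globals first, then Network, then Core
root at: 'modules put: [Network Core]

Socket            # unqualified: may resolve to Network::Socket
Network::Socket   # qualified: looked up only inside Network
```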
So I will:

- implement the ordered lookup described above, and
- change ispry (the REPL) to use them.

When this works it’s getting time to hook into Sophia!
Happy Sprying!
As a Smalltalker I dream “bigger” than just managing source code as text in files…
Smalltalk uses the “image model” in which the system is alive and running all the time, the full development environment is also live together with your application, and we are in fact modifying object structures when we develop Smalltalk programs. We can also snapshot that object memory onto disk and fire it up somewhere else. Several Lisp implementations have used a similar approach I think.
The image model has tons of really cool benefits, I don’t have time to repeat all of them, but a modern implementation of the idea should take a few things into account that were not considered in the 1970s.
Some argue that the image model has downsides - like being an “ivory tower” incapable of interacting with the outside world. The Smalltalk environments have indeed historically suffered a bit in varying degree, but we can easily find ways around those issues while still reaping the awesomeness of a fully live programming environment, especially if we give the above items proper thought from the start.
With Spry I think I have a beginning to a novel approach… as well as taking the above into account.
Most Smalltalks (not all) have been image based and the image has simply been a “memory snapshot” of the whole system down to a single disk file. Although quite novel both then and now - the concept of a single dump onto a file is rather primitive. Today we have TONS of different advanced database engines to choose from - why not use one of them instead?
Yes, GemStone is a remarkable exception to the traditional Smalltalk image model. Already in the late 1980s they realized that the object memory could be made transparently distributed, multiuser, transactional and persistent. GemStone is simply DARN cool and I haven’t seen anything even close in other languages. But it’s not open source, and it’s expensive. And… can be a bit complex too.
Can we do something similar to GemStone but much simpler? Let’s start in the single user perspective. Assume we have a transparently integrated advanced and super fast database engine. Sure, single user, but that will give us a strong platform to stand on for code management, IDE development and a lot more.
But hold on - first we need to decide on a suitable format to store in a database.
I thought briefly about the binary path where we simply store pages of RAM onto disk in order to avoid serialization/deserialization - but I opted out. It’s complicated and it doesn’t fit well with the idea of having future multiple implementations of the Spry VM catering to different ecosystems. I also think CPUs are insanely fast these days, so serialization/deserialization is not a bottleneck.
So let’s presume we serialize, in what format? Definitely in a readable format I would say. What about JSON then? Mmm, JSON is simple and TONS of databases these days rely on it, and in fact Spry will have really nice abilities to manipulate and integrate JSON - but we have a more natural choice in Spry.
Spry is homoiconic and has a simple and easily parsed free form syntax of its own. In the name of unifying concepts and simplicity - we obviously just use Spry! This would be slightly analogous to the storeOn: and readFrom: mechanisms that Smalltalk had from the start, storing data “as code”. But Spry is MUCH cleaner and more consistent here, in the way Lisp is. The data model of Spry, including the model of executable code, is the AST tree, and the syntax of Spry mirrors this tree.
After some optimizations in the current parser I made some tests on my laptop. I daftly generated some fake “data” in the Spry syntax and can happily note that Spry compiles (deserializes) about 10Mb source per second into AST nodes. And it seems to scale pretty fine too. The serialization step is even faster.
To put some icing on that cake I threw in LZ4 compression, which is wickedly fast - so fast it isn’t even noticeable when doing a full back to back cycle of a 430 Mb source file. And source is very amenable to compression, although my sample is repetitive so not a good reference.
Here is a trivial Spry script that reads a compressed file, uncompresses it, compiles it (deserialize) and then serializes it again and compresses it before writing it back on disk:
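The two-line script is lost; here is a hedged reconstruction of the chain it describes - read, uncompress, deserialize, serialize, compress, write - with all the word names being my guesses rather than the actual primitives:

```
# Hypothetical prefix chain, evaluated right to left as described:
writeFile: "image.sy.lz4"
  (compress (serialize (deserialize (uncompress (readFile: "image.sy.lz4")))))
```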
These commands I defined in prefix fashion which means the evaluation ends up as a chain from right to left. One could just as easily have defined them as infix to get a reverse evaluation, or one could use an Elixir style “pipe” function to get that effect. Spry is flexible here and obviously some kind of editor support or conventions may be needed to avoid confusion.
So where are we now? We can read and write files (trivial of course), compress/uncompress strings using liblz4, and serialize/deserialize strings into and from Spry AST nodes. This means we can now store and load code as well as data, and we could trivially extend the REPL with a regular “image model”.
But let’s go all the way. Files are neat, but a really good database is better. I hunted high and low for something that seemed easy to use, with an interesting set of features and really, really good performance. I ended up choosing Sophia and I have a Nim wrapper of the C API of Sophia already cooking. I have a feeling this is going to be a blast, terabyte image size? No problem.
When the wrapper is working it’s time to start thinking of how the memory model can be partitioned, manipulated through transaction boundaries, but that will be another article for another day!
If you want to chat about Spry, join up at http://gitter.im/gokr/spry! Or at #spry on freenode.
But… really, the name “Ni” sucks. It’s hard to say in general and since it’s often mentioned together with Nim the confusion is obvious. And it also turned out to be less than optimal to google for.
So from now on the language is called Spry and there are two domains registered for it pointing to the same place - sprylang.org and spry-lang.net. The main site is sprylang.org; I first registered the .net domain but later realized that sprylang is just as reasonable as spry-lang, and there the .org was not taken.
Well, to be honest I was just surfing for synonyms to various “nice words” and I stumbled over it. I was leaning towards “fleck” but a native English speaking friend talked me out of it. :) Spry jumped up from some thesaurus site and… it just feels nice. And the meaning of spry is also very fitting IMHO:
Spry: full of life and energy
One of the primary goals of Spry is to be a fully 100% live programming language and environment.
Spry: Moving or performing quickly, lightly, and easily
…and yeah, the agility and nimbleness is also a big part. Given the influence from “older” languages like Smalltalk, Rebol, Lisp and Forth - I can also see some sense in the typical use of the adjective spry - namely as pertaining to someone elderly! That’s fine with me.
And hey, it’s triple creamed too…
I am of course partial here - but the tool is really easy to use, doesn’t force you into any specific framework or editor, enables a very quick development cycle and has tons of examples, tutorials, docs and some special IoT focused libraries like for example around BLE (Bluetooth Low Energy). You can have your code running on your phone literally within 1-2 minutes and you don’t need to install XCode or the Android tools, you can even run it just fine from your trusty Linux laptop! Just go for it. Ok, enough of the sales talk…
Since we are specializing in the IoT space we have our office filled to the brim with toys… eh, I mean IoT devices of all kinds from all different vendors. Two IoT communication standards are particularly important in this space: BLE and MQTT. I have already written three blog posts around MQTT using Evothings. Now I am instead focusing on BLE, and particularly the embedded device side of the story.
This led me to round up a bunch of devices at the office that are fairly technically capable and have BLE support. The one I selected was the LinkIt ONE development board from MediaTek & Seeed Studio. It’s an insanely feature packed little board (GSM/GPRS, GPS, Wifi, BLE, sound output, SD card) with decent computing power (ARM7 EJ-S, 16 MB flash, 4 MB RAM) while still remaining in the “medium” embedded space, I would say, still ruling out plain Linux and regular tools. I consider the Raspberry Pi or C.H.I.P. and similar machines to be in the “large” embedded space; they are real computers and you can basically use whatever you like to develop on those.
The medium and small devices can be programmed using for example Espruino or Micropython (two very interesting projects) but in many cases, for more demanding applications, C/C++ is still king simply because of size and performance advantages. And also the fact that hardware vendor SDKs are typically in C/C++. But could there be an alternative language out there?
Yep! Read on to find out…
I have written extensively about Nim before, and there is even a blog article showing that Nim can run on an Arduino UNO! To Nimmers this isn’t surprising: Nim compiles to performant C, and if you turn off the GC etc, Nim can fit wherever C can fit.
But that article used the low level mechanics of interfacing with C/C++ and did not use c2nim, the excellent wrapper generation tool written in Nim by Andreas Rumpf, the primary author of Nim.
What if we could use c2nim to wrap most of the Arduino and MediaTek C/C++ libraries used to access all the features of the LinkIt ONE?
Two weeks ago I pushed Ardunimo to GitHub. Cheesy name, but I couldn’t resist. Beware though, I am probably the only one who has tried running this, since you would need a LinkIt ONE!
The README at GitHub shows how to get going, and all this is ONLY tested on 64 bit Ubuntu 14.04. But the top dir contains a Vagrantfile that I also threw together that will fire up an Ubuntu 14.04 VM and install and build all the Nim parts (Nim, nimble, c2nim) you need.
The fun part is that you don’t need anything else! The LinkIt ONE officially only supports development from Windows or OSX using the Arduino IDE, but I found some instructables showing that you can indeed use the Arduino IDE on Linux too. I used that, and by looking at what the Arduino IDE does I crafted the Makefile to gain full control over the build process and get rid of the Arduino IDE.
So if you are like me, comfy from the command line, this setup should be perfect!
Ardunimo consists of two Makefiles, one to create the wrapper and one to build the binary for the LinkIt ONE. A Nim wrapper consists of Nim modules, basically one .nim file for each header file of the C/C++ API we are wrapping. The top level Makefile will call the Makefile in the wrapper
directory if needed, to generate the wrapper modules, but we can also do it manually in the wrapper directory using say make clean && make
. Then we get all the nim wrapper modules recreated:
[listing of the regenerated .nim wrapper modules omitted]
The Makefile uses the c2nim tool to generate these wrappers by parsing the header files from the SDK. These headers are in the src
directory, and there are some local small modifications to them. Some details were harder to fix in the header files, so those files I ended up hand editing after the c2nim generation, and placing them into the fixed
directory, so they are just copied from there. A bit more work to maintain of course.
The c2nim tool can generate wrappers following Nim naming conventions (CamelCase etc) but it caused some issues for me so I skipped that part.
The top level Makefile builds our binary that we can copy onto the LinkIt ONE. It does this by first compiling the Nim source code, for example blink.nim
, into .cpp source code, which then is compiled using the same commands that the Arduino IDE uses to compile. Normally the Nim compiler calls GCC (or whichever C compiler) by itself, but here I perform the C++ compilation and linking as usual in the Makefile in order to gain a bit easier control over the compilation and link commands.
Btw, this Makefile will also download the SDK and the ARM GCC to local directories, if they are missing.
The Arduino style is to have an include at the top, and then to define the setup() and loop() functions that the Arduino framework will then call. This is what blink.nim looks like:
[blink.nim source listing omitted]
Before you build and run the above - you need to patch Nim. Then you can continue to build and run, see instructions in README.
If we take a closer look at the ardunimo.nim
file from the wrapper, it looks like this:
[ardunimo.nim source listing omitted]
Some things worth noting:

- … (see nim.cfg for the compiler directives) so there is no harm in importing stuff.
- We declare the NimMain() C-function that Nim has generated so we can call it. This is for Nim’s runtime support to get a chance to initialize itself. The entry point of the binary is not controlled by Nim, so we need to make this call manually.
- We wrap setup() and loop() so that it looks cleaner and to include the call to NimMain(). Nim templates are AST based macros and very powerful.

Just write your own blabla.nim and then build it using make clean && make PROJECT=blabla; it should hopefully work, although remember that the wrapper is by no means complete nor at all tested. :)
blink2.nim is similar to blink.nim but creates, at compile time, a Nim sequence with different pauses in it. nimicro.nim is a first step at getting the Ni VM running; it still fails, but it can at least run the trivial program consisting of a single integer literal “1000” :)
For the moment this experiment is paused; I am now focusing more on ARM mbed OS based development, so I might end up redoing something similar, but with mbed OS instead of Arduino.
Anyway, hope someone found this interesting and fun!
I have just started working at Evothings!
It’s a fun gang making slick development tools and libraries for building mobile IoT apps. Evothings is pushing the envelope on really easy mobile development focused on all the new nifty IoT-devices flooding us over the next few years.
In my last article I predicted Elixir to become big and now that I am learning the Evothings tools I wanted to make an Evothings example that uses Phoenix, the Elixir web server framework, as a backend, using its channels mechanism for websocket communication.
Coinciding with the release today of Evothings Studio 2.0 beta 2 (yay!) I will show step-by-step how to:
Since not everyone has a Linux server up on the internet you can skip step 3 and just use my public server :)
Let’s go!
So what does the app do? The original Evothings sample app (that we intend to modify) scans for BLE (Bluetooth Low Energy) devices nearby and shows a list of them with some information. It’s a very simple example of using your mobile to scan for devices like Estimotes, TI SensorTags or any other device using BLE which modern mobile devices support. It looks like this:
The twist we will add is to let the app send this information to a Phoenix server reachable on the internet, onto a pubsub channel, and then subscribe to that channel in order to populate the list on screen. This means all the participating mobile devices will show the union of all currently scanning mobiles.
The Evothings mobile app we are modifying is written in Javascript and runs inside the Evothings Viewer - a hybrid web view client that we will install via App Store or Google Play. The viewer is based on Cordova but enhanced in various ways with libraries specifically for IoT scenarios.
The communication between the app and Phoenix will go over Secure Websockets abstracted inside Phoenix Channels.
Phoenix is a new very exciting framework for building highly scalable and very robust web applications. It’s written in Elixir which is a fairly new language standing on the shoulders of a giant - Erlang.
Elixir compiles to Erlang bytecode so an Elixir application, when deployed, is in effect an Erlang application. This article is not explaining why you should be excited about Elixir, Erlang and Phoenix (you should!) but instead shows how to use Phoenix channels together with Evothings.
Step 1 is to install the Evothings Studio on your developer machine. Don’t worry, it’s just a zipped folder that you remove later if you don’t get hooked :)
Just go to evothings.com/download, download the correct zip file for your platform and unpack it somewhere. Then run EvothingsWorkbench
in the unpacked directory.
When the Workbench opens up you find instructions there on how to proceed under the heading “Getting Started”. Follow steps 1 and 2 and then continue reading here!
NOTE: Yes, we will make proper installers for Windows etc soon.
Just to make sure you have your mobile device connected, try clicking RUN
on one of the example apps listed under the Examples
tab in the Workbench. You should see it running in your viewer on the mobile device. Easy, right?
Time for Step 2, making our own variant of the “BLE Scan” example. We will copy and modify it:

1. Find the “BLE Scan” example under the Examples tab and press the COPY button on it.
2. Under the My Apps tab you now see your own “BLE Scan” (note the directory path saying ble-multiscan). Press RUN to see it running on your phone.
3. Press the CODE button and find the index.html file. Open it in any editor and change the title of the app on line 10 and line 53 to “BLE Multiscan”. Tada! Notice how it autoreloaded on your phone with the new name. The entry under My Apps should also have the new name.
4. Press Start Scan on your mobile and see if you can find any BLE devices around; if not… this whole exercise will get fairly boring :)

Ok, so we are up and running with Evothings. You have successfully modified and run your mobile app.
Now let’s get a Phoenix server going… There are multiple routes here you can take:
NOTE: You could in theory use your local dev machine but then you will also need to make sure your mobile device can reach your dev machine by ipname over wifi and you will also need to make a real cert for that ipname typically via StartSSL. This means it’s fairly impractical as you need a DNS name for your machine locally, and in addition you will need to get a proper cert for that specific name - which can also be done of course, but well, I don’t think many will try. :)
I run Ubuntu or Debian if I can choose, and the following instructions are verified for Ubuntu 14.04.3. We are actually simply following instructions found in the Phoenix docs so if you need more details, take a look there.
First we add the Erlang Solutions repo to get access to Erlang and Elixir, and then install the elixir package which sucks in Erlang too.
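The original commands here follow the standard Erlang Solutions setup; a sketch for Ubuntu 14.04 (the repo package URL and file name are assumptions — check the Erlang Solutions site for the current ones):

```shell
# Fetch and install the Erlang Solutions repository package (URL is an assumption)
wget https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb
sudo dpkg -i erlang-solutions_1.0_all.deb
sudo apt-get update
# Installing elixir pulls in Erlang as a dependency
sudo apt-get install -y elixir
```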
After that, elixir --version should report something like Elixir 1.1.1.
Using mix, the Elixir build tool, we can now install the Elixir package manager hex, which you can also browse online; just hit enter as confirmation.
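Installing hex is a single mix task:

```shell
# Installs the hex package manager locally; press enter to confirm
mix local.hex
```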
It will be installed as an archive (a zip file basically) locally in your current user’s home directory. Hex is like “npm” for Elixir. The mix tasks inside the archive are then available in mix, so running mix --help
will now show a range of hex tasks available.
We then install Phoenix, the Elixir web framework. It is also packaged as an archive but there is no hardwired mix command like “local.phoenix” to install it, so we do it more explicitly using mix’s archive command.
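A sketch of the archive install (the URL is an assumption — take the real one for your Phoenix version from the Phoenix installation docs):

```shell
# URL is illustrative only; see the Phoenix installation guide
mix archive.install https://github.com/phoenixframework/archives/raw/master/phoenix_new.ez
```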
After this we can list our installed archives and it should look like:
[mix archive listing omitted]
We also notice that mix --help now shows the available task phoenix.new.
Finally, we need nodejs and npm for various frontend tools that Phoenix uses like Brunch, and inotify-tools is for Phoenix live code reloading.
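On Ubuntu this is typically a single apt-get line (package names are the stock Ubuntu ones, assumed here):

```shell
# nodejs/npm for Brunch and other frontend tooling,
# inotify-tools for Phoenix live code reloading
sudo apt-get install -y nodejs npm inotify-tools
```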
In order to get a “full stack” Phoenix setup I also include information on how to get PostgreSQL going, although at the moment we don’t use PostgreSQL in this example.
Start by installing it.
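On Ubuntu, something like:

```shell
sudo apt-get install -y postgresql postgresql-contrib
```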
And we also need to configure the password for the default user postgres that Phoenix likes to use, by switching to the postgres user and running some psql.
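A compact sketch of that (the password value is an assumption; Phoenix’s generated config defaults to the postgres/postgres user and password):

```shell
# Run psql as the postgres user and set the password Phoenix expects
sudo -u postgres psql -c "ALTER USER postgres WITH PASSWORD 'postgres';"
```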
Phew! But now we should have all we need to build a Phoenix application.
I am doing this in my home directory, but feel free to do it wherever you like. Let’s use the mix tool to create a Phoenix application called “multiscan” (hit enter on the dependency question).
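A sketch of the generator invocation:

```shell
# Creates the ./multiscan application skeleton; press enter when asked
# whether to fetch and install dependencies
mix phoenix.new multiscan
```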
It should create a lot of stuff and end with something like:
[mix phoenix.new output omitted]
Having a tool like mix use tasks to generate scaffolding is pretty neat; it makes sure all Phoenix apps follow the same structure. This is nothing new for Rails people of course, but can be a new thing for some of us.
Ok, let’s do what we are being told. Ecto is the database abstraction in Phoenix (wrapping several different databases), so the ecto.create task will create a database.
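A sketch, run from inside the new application directory:

```shell
cd multiscan
mix ecto.create
```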
This may prompt to install rebar, which is just fine. Then it should compile the whole application and end saying:
[mix ecto.create output omitted]
And hey, let’s run some tests.
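That is just the standard mix task:

```shell
mix test
```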
You should among other things see 4 tests, 0 failures in green.
And we can also fire up our application, which will start serving on port 4000 by default.
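A sketch of starting the dev server:

```shell
# Serves on http://localhost:4000 by default
mix phoenix.server
```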
Cowboy by the way is the Erlang HTTP server that Phoenix uses. Apache… Cowboy… you get it.
Then check all is working by surfing to http://localhost:4000!
It should look like my server looks, only difference being my server runs HTTPS on port 1443.
Note that you can ignore this section, but I just wanted to mention that you can easily have Evothings handle your app while it is contained in a subdirectory of the Phoenix server application.
The Evothings app is just a directory and Evothings Studio can keep track of several apps under My Apps
regardless of where they are on your hard drive. This means we can easily maintain the whole system in a single git repository.
If you are making the Phoenix app on a server then this advice does not really apply. I did all my coding on my laptop and then cloned it over to my public server in the end.
But if you want to do this, let’s move the application into the Phoenix file tree under the name app.
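A sketch of the move (the source path is an assumption — use wherever your copied Evothings app actually lives):

```shell
mv ~/ble-multiscan ~/multiscan/app
```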
Then we update Evothings Studio’s notion of where the app is by removing it from My Apps
by clicking the (x) - yeah, no worries. Then add it back to Evothings Studio by dragging the file ~/multiscan/app/index.html
and dropping it on Evothings Studio. Evothings will pick up the directory path for index.html
and will consider that directory to be the app.
Finally, you can do the git dance inside ~/multiscan.
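Something along these lines (commit message is an assumption):

```shell
cd ~/multiscan
git init && git add --all && git commit -m "First commit"
```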
Next we basically follow the guide on channels in the Phoenix documentation. First we enable a channel we call “scan” by adding the file web/channels/scan_channel.ex
:
[scan_channel.ex source listing omitted]
Above we can see that we create our own module called ScanChannel and use the Phoenix.Channel module inside it to gain access to those functions. Then we define two variants of our join function, using pattern matching on the first argument to decide which one is used!
The first one basically says that the topic:subtopic “scan:public” is fine for anyone to join, we return the tuple {:ok, socket}
signifying all was fine. :ok
is the syntax for an Elixir atom which is basically the same as a symbol in Ruby or Smalltalk.
The second definition of join is nifty indeed. The underscores signify that we ignore those arguments, but the first parameter declaration is written as a concatenation of the string “scan:” with “something we ignore”.
Hehe, yeah, <>
for string concatenation sure made my eyes widen too, and especially in the context of pattern matching, but dispatching on multiple functions like this is darn neat.
Then we modify a single line in web/channels/user_socket.ex to add the above channel.
[user_socket.ex diff omitted]
For our mobile app the following is not really needed, but Phoenix also serves a web frontend that can join the same channel. Let us also enable this by including socket.js
in our web/static/js/app.js
which represents our web application:
[app.js snippet omitted]
…and then edit socket.js
to have it join our new channel:
[socket.js snippet omitted]
We can then verify we have a working channel by going to the Phoenix web application at http://yourmachine.com:4000 (or try my server at https://padme.krampe.se:1443) and pressing CTRL-ALT-J (typically) to check the console, where it should say Joined successfully. Wohoo!
Ok, but… now we want to make the Evothings mobile app talk to this channel! Going back to our “BLE Multiscan” application we need to:
The javascript client side code for Phoenix channels is phoenix.js. We need to include this library in our Evothings app, but it is written in ES2015 (aka ES6 or ECMAScript 2015), the latest version of Javascript, and this isn’t fully supported by browsers yet, so you must use a transpiler to make it work. Whatever one may think of the… feature explosion in ES6 - it’s probably wise to start learning it.
Our Evothings app is however written in plain ES5 so to avoid rewriting it we would like to use phoenix.js
transpiled to ES5. Phoenix already has Brunch integrated which is a neat “build tool” for the web stuff, and it in turn is configured out-of-the-box to run the Babel transpiler on all js files in order to compile any ES6 code to ES5 (good ole Javascript).
In other words, we can already find a ES5-compiled version of phoenix.js
in our filetree that we can copy and stuff into our mobile app.
[phoenix.js file listing omitted]
The last one is the regular source code in ES6 and the first one is the Babel-translated one in ES5 - which is the one we want. We copy it into our Evothings app:
[copy commands omitted]
… and then we include it as a separate lib in index.html:
[index.html script includes omitted]
… so we can require it at the top in app.js. Next up is modifying the application itself. Here are all modifications I made, with explanations below.
[full modified application source (~197 lines) omitted]
IMPORTANT: On line 34 you will need to stuff in your own ipname for your Phoenix server, and port. Or else you will connect to mine, which is of course just fine too :)
No magic going on here really, we simply push any discovered device as JSON back to Phoenix on the topic. And we also show any incoming devices on the same topic on the display list.
When running the Evothings Viewer like we do, via Evothings Studio, the app is actually served via HTTPS to the viewer from Evothings’ proxy servers running in the cloud. So if the app is meant to connect to some other server (like our Phoenix server) using websockets, it needs to also use proper HTTPS, otherwise it will not work.
So we need to get our Phoenix to talk HTTPS with a proper cert, self signed doesn’t cut it for secure websockets.
I first tried CACert.org, but that … failed in various ways. Then someone hinted that StartSSL actually gives you ONE fully proper cert for free! And sure enough that worked great. So get a free one from them matching the ipname of your server - it was fairly easy to do.
You will however need to remove the pass phrase from the key, which I solved easily with openssl.
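Stripping the pass phrase from an RSA key is a one-liner with openssl (file names here are assumptions):

```shell
# Prompts once for the pass phrase, then writes an unencrypted copy of the key
openssl rsa -in ssl.key -out ssl-nopass.key
```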
Then we can configure Phoenix to use this cert by modifying config/dev.exs
:
[config/dev.exs HTTPS configuration omitted]
Of course you should use your own host and port in the config above. And then I added a small run.sh script to start the server.
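A minimal sketch of such a script (everything here is an assumption; the cert and key locations come from whatever config/dev.exs points at):

```shell
#!/bin/sh
# Start Phoenix in dev mode; the HTTPS cert/key paths are taken from
# config/dev.exs (e.g. files placed in the local cert/ directory)
MIX_ENV=dev mix phoenix.server
```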
In the local directory cert you need to place those three files from StartSSL. Then just run the script and with some luck it will start serving over HTTPS :)
Now, for those approximately 3 people out there who bothered reading this far… :) Time to see if it works!
You can try this out in different ways, one of them being to simply use my public server at padme.krampe.se:1443. Start up at least two devices with the app; if things work you should be able to start scanning on any of them and both should quickly show the devices found. Note however that if others are using my server you will see their devices too. The following movie shows it working:
Please give feedback in the comments below and I can adjust this article accordingly! You can also find me and the rest of the Evothings team at gitter or on #evothings at freenode.
An obvious extension to this experiment would be to add a web frontend in Phoenix so that you can just surf there to see all scanned devices in realtime, and of course throw some Ecto love at it to make some stuff persistent.
Hope you found this interesting!
regards, Göran