Sunday, January 12, 2014

ScaleSimple - An Open Source CDN

I am proud to announce that today we are open sourcing our project, ScaleSimple, an open CDN platform. Head on over to GitHub to see all the code. We have been chipping away at this for quite some time and at one point, thought it would be a viable SaaS business. However, running and supporting infrastructure while keeping our day jobs was certainly not an easy endeavor, so we agreed it was time to just open source what we had.

For a little background, the reason we came up with this platform was that we felt the gap in the market between "low end" CDNs and "enterprise" CDNs was too wide. The low end platforms were cheap and very easy to use, but lacked any substantial feature set or customization. On the high end, the features were rich, but so was the cost. It was not viable for a startup that needed a deep feature set to go to a lot of these enterprise players. Now, maybe we are way off the mark, and the majority of people don't need the flexibility or customization that we envisioned, but either way, we built something. We felt the tools available were mature enough to allow someone to seriously consider running their own CDN platform.

Our idea was to leverage Varnish, but also build a nice UI around it that allowed people to build very customized rules without having to muck with VCL. This also gave us hope that we could build more complicated VCL snippets or even custom VMODs that would be hidden from the end user behind a few drop-downs and checkboxes to make things easy. Two examples of this, Token Authorization and GeoIP blocking, are built and part of the open source code.
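For the curious, the actual Token Authorization rules live in the repo, but the general shape of a CDN token-auth check is simple enough to sketch. This is illustrative Python, not our implementation; the secret and the function names here are made up:

```python
import hashlib
import hmac
import time

# Hypothetical shared secret between the origin and the edge.
SECRET = b"shared-secret"

def make_token(path, expires):
    """Sign a URL path plus an expiry timestamp."""
    msg = ("%s%d" % (path, expires)).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def check_token(path, expires, token):
    """The edge rejects expired links, then compares signatures in constant time."""
    if expires < time.time():
        return False
    return hmac.compare_digest(token, make_token(path, expires))
```

The origin would hand the client `expires` and the token as query-string parameters, and the edge simply recomputes and compares before serving the object.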

As it stands the platform is functional, but needs some love, specifically unit tests. Not just on the Rails/RSpec side, but also using varnishtest to ensure that all of our rules and new configurations work under every permutation imaginable. We also need more documentation. Right now there is a fair bit of looking through code to figure out what's going on. We started with some basics, but it's not there yet. We also need better first-time user "bootstrapping" so that users can get up and running quickly without a lot of fuss. An installer would be amazing. We also need better per-install customization, by using things like .env files for the Rails app to make per-install variables easier to configure.

Now that Varnish has all this flexibility with VMODs (even more so in the upcoming 4.0), one of our hopes was that people would now have a place, dare I even say an open marketplace, to submit things like VMODs and new configuration ideas that would continually enhance the platform. One of the concepts we have in ScaleSimple is "templates", so that you can build a system-wide ruleset that can be used to pre-configure new rulesets very quickly. Good examples here are things like WordPress, Drupal, etc. It can be tedious to get all the nuances of these configurations right (dealing with cookies, admin login URLs, etc.), so having a template really helps here. Not to mention an easy way to apply that template to multiple configurations for different hostnames.

We hope that the community finds the platform useful, and we hope to get a ton of activity to truly make ScaleSimple something great. We personally see a lot of potential to disrupt this space and be incredibly innovative. Please follow us on Twitter at @scalesimple for updates on our progress, and pull requests are welcome!

Update: made it to the front page of HN! Continue the discussion on Hacker News

Cheers
Adam
@denen

Sunday, September 23, 2012

Platform update on Huffpost Live - Ask Our Tech Guy

Once again, having tons of fun with the hosts on HuffPost Live, my new project that just launched in mid-August. In a new segment, "Ask Our Tech Guy", we open up the floor to the community to let them ask me anything they want about the platform my awesome team built.




Saturday, April 28, 2012

Using gproc and cowboy to broadcast messages via websockets in Erlang

Recently, I have been doing a fair bit of reading around event-based programming and some of the design patterns associated with it, namely the Actor Model.  The basic concept is that instead of dealing with threads, locks and shared memory, there is no shared memory (everything is immutable) and there are no side effects, and thus there is no need for locks.  Instead of threads, Erlang, for example, will spawn tons of processes, and if you want them to communicate you do so by message passing, not shared memory.  The result is that this can produce much better concurrency on multi-core systems, for a bunch of reasons, one of them being that message passing between ultra-lightweight immutable processes is much more efficient than managing and context switching between threads.  Erlang has its own scheduler that will spawn one thread per core and manage all the processes on each core for you, allowing you to scale with the number of cores with zero development effort (it's part of the VM).
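Erlang's processes and mailboxes are the real deal, but the message-passing shape itself can be sketched in a few lines of Python, with a queue standing in for a mailbox. This is purely illustrative; Python threads get none of the per-core scheduling benefits described above:

```python
import threading
import queue

def actor(mailbox, results):
    """A worker that shares nothing: it only reads messages from its mailbox."""
    while True:
        msg = mailbox.get()
        if msg is None:        # a "poison pill" message tells the actor to stop
            break
        results.put(msg * 2)   # "reply" by sending a message, never by mutating shared state

mailbox, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=actor, args=(mailbox, results))
worker.start()

for n in (1, 2, 3):
    mailbox.put(n)             # communicate by message passing
mailbox.put(None)
worker.join()
```

The point is the discipline: nothing is shared, so nothing needs a lock.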

This is actually one of the key reasons I recently started exploring these design patterns.  CPU clock speeds have somewhat leveled off over the last 12-24 months, and companies like Intel and AMD are focusing more on adding cores rather than raw clock speed.  What this means is that vertical scaling is not really increasing at the rate it once was, and we have to think about programming for a true multi-core environment.  To explore Erlang, I took a real use case I had that I thought would be perfect for maximizing concurrency on a box: websockets.  The real-time web is inevitable and upon us, and with the dizzying number of "event based, asynchronous" JavaScript frameworks, it should be obvious that this is the path of modern web application development - pushing data from the server to the client.

Now, websockets are a relatively new technology, and I am not even 100% positive that the RFCs are finalized, as they have changed a few times already.  However, doing true bidirectional communication between the browser and the server is something that I was very much interested in, so the research began...

In researching libraries, there seemed to be a few contenders. Misultin and Cowboy seemed to be the most prevalent that supported websockets, so I dug in.  About a month ago there was a thread about Misultin stopping development (I am sure someone will pick it up), and while the benchmarks I read looked impressive, Cowboy seemed to have a much stronger and more active development community.

So, my first task was to figure out how to build a websocket server that would establish a native websocket connection with the browser, reply to a message sent to it, and also allow a background process to shoot a message to a bunch of connected clients.  All but the latter work pretty much straight out of the box.  Luckily, the creator of Cowboy pointed out the module gproc, which enables you to keep a "process dictionary" of a set of processes (in our case, websocket connections) and give it a name to be referenced later.  Whenever a websocket connection is created, we register that connection and give it a name (the same name, in fact, for every connection).  Then, anywhere in our code we can pass a message to that gproc name, which will then fire the message over all the websocket connections that were registered with that name.  As long as our websocket handler recognizes this message, we pass it on through.  Enough talk, let's look at some code...

First things first, the way cowboy works, is you need to register a "handler".  This is essentially a module that gets called when a connection is made (a callback) that needs to implement a certain set of functions.  To implement a handler in cowboy you need a function that looks like this:

-module(my_handler).
-export([init/3, handle/2, terminate/2]).

%% Called for every request before handle/2.
init({tcp, http}, Req, Opts) ->
    {ok, Req, undefined_state}.

%% Send a plain HTTP response.
handle(Req, State) ->
    {ok, Req2} = cowboy_http_req:reply(200, [], <<"Hello World!">>, Req),
    {ok, Req2, State}.

terminate(Req, State) ->
    ok.


Now whenever an HTTP call is made, we will pass through init/3 and then handle/2.  Here you can access all the necessary HTTP request params (query, host, headers, etc.) and take some action.

Next up is our generic websocket code in the module.  The first thing we need to do is add some code to init to "upgrade" the connection to websockets.  The websocket specification says that a browser that supports websockets initiates the connection with a request carrying a header that looks like "Upgrade: websocket".  So in our init we look for this, which allows us to tell Cowboy to switch to a websocket connection instead of HTTP:



init({_Any, http}, Req, []) ->
    case cowboy_http_req:header('Upgrade', Req) of
        {undefined, Req2} -> {ok, Req2, undefined};
        {<<"websocket">>, _Req2} -> {upgrade, protocol, cowboy_http_websocket};
        {<<"WebSocket">>, _Req2} -> {upgrade, protocol, cowboy_http_websocket}
    end.

Now, this will tell Cowboy to change the handler from HTTP to websockets.  Our next step is to do a couple of things: first, define a "key" that gproc will use to store all the processes, and second, register each new websocket connection against this key.  This will allow us to later reference every connection by this key so we can broadcast the message.  So first, let's define our key:

-define(WSKey,{pubsub,wsbroadcast}).

Now, when using websockets in Cowboy there are a few functions in the callback handler module: websocket_init, websocket_handle, websocket_info and websocket_terminate.  We are going to focus on websocket_init (new connection) to register the connection, and websocket_info, which deals with messages over the pipe.  First, let's register the connection in websocket_init:


websocket_init(_Any, Req, []) ->
    Req2 = cowboy_http_req:compact(Req),
    gproc:reg({p, l, ?WSKey}),
    {ok, Req2, undefined, hibernate}.

Here we call gproc:reg() with a three-element key: p is the type (property), l is the scope (local) and ?WSKey is our key to register this process against.  Since every connection will use the same key, we now have an easy way to store and reference every connection.  Next, we update websocket_info to handle the message being sent over the connection.


websocket_info(Info, Req, State) ->
    case Info of
        {_PID, ?WSKey, Msg} ->
            {reply, {text, Msg}, Req, State, hibernate};
        _ ->
            {ok, Req, State, hibernate}
    end.

Here we match on a message of the format {_PID,?WSKey,Msg}.  Once we match, we simply pass the message over the connection.  Our last step is to actually broadcast a message in this format.  Now, here is where you need to roll your own logic to determine how you want to ingest messages (queue, webservice, etc.), but once your app gets the message you are waiting for, all you need is one line to send it:

gproc:send({p, l, ?WSKey}, {self(), ?WSKey, Msg})


This simply sends message "Msg" to all the processes that were registered with the key ?WSKey and then gets handled in websocket_info and thus passed over the websocket connection.
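If the gproc flow is hard to picture, here is the same register-then-broadcast pattern sketched in Python, with a plain dict standing in for gproc's registry and lists standing in for connections (illustrative only, none of gproc's process semantics):

```python
from collections import defaultdict

# A dict standing in for gproc's registry; each "connection" is just a list
# that collects the messages it would have written to its socket.
registry = defaultdict(list)
WS_KEY = ("pubsub", "wsbroadcast")

def register(key, conn):
    """The websocket_init step: each new connection registers under the shared key."""
    registry[key].append(conn)

def send(key, msg):
    """The gproc:send step: deliver msg to every connection registered under key."""
    for conn in registry[key]:
        conn.append(msg)

a, b = [], []
register(WS_KEY, a)
register(WS_KEY, b)
send(WS_KEY, "hello")   # both "connections" receive it
```

Everything registered under the one shared key gets every broadcast, which is exactly what the Erlang code above does with real processes.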

Now, you can obviously get clever and put some logic in websocket_init to determine which key the user should get so that you can segment your users.  You can also use the handle/2 method to do things like accept a query param like ?channel=X which would allow you to subscribe to certain keys via an HTTP call from the client side.  A lot of possibilities here.

Well, hopefully this helps someone get going with websockets.  In my next post I am going to take this example to the next level and wire up an AMQP connection to consume messages in real-time and then broadcast those consumed messages over the connection.  Stay tuned...





Wednesday, November 30, 2011

Ubuntu error: key_from_blob

If you have seen this error before on Ubuntu (or any flavor of Linux, for that matter), here is how I finally resolved my issue after some annoying digging.  When adding a new DSA or RSA key to the authorized_keys file on your remote SSH server, you may see the following error:


Nov 30 11:58:56 li321-228 sshd[7292]: error: key_from_blob: can't read dsa key
Nov 30 11:58:56 li321-228 sshd[7292]: error: key_read: key_from_blob AAAAB3NzaC1kc3MAAACBALZa7U63gJeJm5zHAaP9x1fQKfRdvkbHukV6T8S+392Vs74gQTLn\n failed

I didn't notice the "\n" at first glance, but then realized that when I pasted the multi-line DSA key into the authorized_keys file, it still had newlines in it.  Take out the newlines and you are set!
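In other words, an authorized_keys entry has to be one physical line, and the fix is just to collapse the wrapped base64 (sketched in Python for clarity; running the pasted key through `tr -d '\n'` does the same):

```python
# A key pasted from a wrapped terminal session ends up with embedded newlines:
pasted = "ssh-dss AAAAB3NzaC1kc3MAAACBALZa\n7U63gJeJm5zHAaP9x1fQ\n user@host"

# Collapse it back to the single line sshd expects.
entry = pasted.replace("\n", "")
```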


Sunday, January 23, 2011

An MD5 Hashing function for varnish

I was toying with Varnish, a very cool open source web accelerator that seems to be getting a lot of attention recently. Since we have done some fairly complex caching setups at my current employer using a well known CDN, I figured I would dig in and see how capable Varnish really was.

For starters, the documentation is unfortunately a bit slim. There is virtually nothing around on Google and no real advanced or complex examples anywhere. So when you need to do some serious tinkering and you hit an error, you just have to go through some trial and error.

The first thing I noticed was that there were no builtin "hash functions" you can call, in particular MD5. I had a need to take 3 parameters from the query string (something you need to do with a regex) and concatenate them into an MD5 hash. Since Varnish didn't supply this functionality, I figured I was pretty much out of luck. However, a little more digging and I discovered a couple of VERY interesting things. Firstly, the DSL that Varnish uses is basically compiled into C to be super fast. What this means is that there is the ability to put inline C directly into the config file. That's right, you can basically wrap your C code with C{ ... }C right in the config file and do almost anything in there. Now we are getting somewhere. But was I really going to find an implementation of md5.c and stick that in the middle of a config file? That seemed like serious overkill...
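For reference, the hash I was after is nothing exotic; in a language with batteries included it is a one-liner (Python shown purely for comparison, and the parameter names are made up):

```python
import hashlib

def signed_key(a, b, c):
    """Concatenate three query-string values and return the hex MD5 digest."""
    return hashlib.md5((a + b + c).encode()).hexdigest()
```

That 32-character hex digest is exactly what the C library below has to produce from inside Varnish.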

Then I discovered my second secret feature: load_module. There is an example buried in the wiki on how to compile the MaxMind GeoIP library into VCL as a module and execute its function to do a country-to-IP lookup. Ah, now we are talking. So what did I need to do? What anyone would do in this situation: grab an implementation of md5 in C and write your own library, of course! So that is what I did. I downloaded the md5 implementation written by L. Peter Deutsch (http://sourceforge.net/projects/libmd5-rfc/files/) and then wrote my own library. This involved a couple of steps...

First I had to write my own md5 library that I could expose to Varnish, which I conveniently named md5_hash. I basically had to create a C source file with the following contents:


#include <stdio.h>
#include <string.h>
#include "md5.h"   /* L. Peter Deutsch's libmd5-rfc header */

char * md5_hash(char * md5_string)
{
    md5_state_t state;
    md5_byte_t digest[16];
    /* Note: a static buffer, so this function is not reentrant. */
    static char hex_output[16 * 2 + 1];
    int di;

    md5_init(&state);
    md5_append(&state, (const md5_byte_t *)md5_string, strlen(md5_string));
    md5_finish(&state, digest);

    for (di = 0; di < 16; ++di)
        sprintf(hex_output + di * 2, "%02x", digest[di]);

    return hex_output;
}


Then I built a Makefile that turned this into my very own libmd5varnish.so to be used inside of Varnish. Now we need to load this into Varnish and make the md5_hash() function available. To do this you need to use inline C and place the following in your VCL file:


C{
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>

    static const char* (*md5_hash)(char* str) = NULL;

    /* Runs once when the compiled VCL is loaded. */
    __attribute__((constructor)) void
    load_module()
    {
        const char* symbol_name = "md5_hash";
        const char* plugin_name = "/etc/varnish/modules/md5/libmd5varnish.so";
        void* handle = NULL;

        handle = dlopen( plugin_name, RTLD_NOW );
        if (handle != NULL) {
            md5_hash = dlsym( handle, symbol_name );
            if (md5_hash == NULL)
                fprintf( stderr, "\nError: Could not load MD5 plugin:\n%s\n\n", dlerror() );
            else
                printf( "MD5 plugin loaded successfully.\n" );
        }
        else
            fprintf( stderr, "\nError: Could not load MD5 plugin:\n%s\n\n", dlerror() );
    }
}C


Excellent. Now I have a function called "md5_hash" available to me via inline C. So how do I call it? Say you want to set a header that is the MD5 sum of a string called "random blog post" (it could just as easily be a header you extract via VRT_GetHdr). You stick this anywhere you need in your config:


C{
    VRT_SetHdr(sp, HDR_REQ, "\006X-MD5:", (*md5_hash)("random blog post"), vrt_magic_string_end);
}C



That's pretty much it; I did some basic testing and it works like a charm. To save someone else the hassle, I open sourced the whole library I wrote and stuck it on GitHub here: https://github.com/denen99/libmd5varnish. Feel free to fork or improve it.

Hope this helps someone; I know I searched for hours with nothing in sight on how to solve this. There are a couple of posts on the mailing list that claim this will be natively part of version 3.0, but I didn't want to wait :-).

Good luck.

Wednesday, May 26, 2010

Using hazelcast with Jruby

So, I have been doing some reading on Hazelcast as I have been researching In-Memory Data Grids a bit. I spent some time with Gemfire (which is a very cool product) and then found this seemingly very close open source alternative. It's a linearly scalable memory cluster that allows you to do a lot of very cool things in memory, such as hashmaps, queues, topics, callbacks, etc. Since I have been toying with building some Java clients in JRuby, I thought I would share some really simple examples of how I got up and running calling and interacting with a Hazelcast cluster from a JRuby client.

To run Hazelcast, simply download the latest version, cd into the bin/ directory and run "./run.sh". This should get your cluster up and running. My example uses the IP 192.168.1.3:5701; you should substitute your own IP here.

Example 1: Basic Read/Write cache

This example simply connects to a Hazelcast cluster, writes 2 entries and then reads those 2 entries back. Certainly not very valuable, but it shows how easily you could replace a memcache library, for example.

require 'java'
require 'hazelcast-client-1.8.4.jar'

import com.hazelcast.core.Hazelcast
import com.hazelcast.client.HazelcastClient
import java.util.Map
import java.util.Collection

class MyClass
  def initialize
    @client = HazelcastClient.newHazelcastClient("dev", "dev-pass", "192.168.1.3:5701")
    @map = @client.getMap("default")
  end

  def write(k, v)
    @map.put(k, v)
  end

  def read(k)
    puts "Key: " + k + " Value: " + @map.get(k)
  end
end

c = MyClass.new
c.write('key1', 'value1')
c.write('key2', 'value2')

c.read('key1')
c.read('key2')


When you run it, here is the basic output, as expected:

$ jruby test2.rb
Key: key1 Value: value1
Key: key2 Value: value2

Example 2: Enable event callbacks with hazelcast

This one I found a bit more interesting. Here, we register a callback for the entryAdded event, which basically allows us to "subscribe" to the cluster and get a callback whenever an entry is added. There are also methods for evictions and updates as well, but I kept it simple here for demo purposes. Here I demo how, when I add a new key on the console, I immediately get a callback in my JRuby script that a new item was added. Very powerful stuff here.

require 'java'
require 'hazelcast-client-1.8.4.jar'

import com.hazelcast.core.Hazelcast
import com.hazelcast.client.HazelcastClient
import java.util.Map
import java.util.Collection

class MyListener
  include com.hazelcast.core.EntryListener
  include com.hazelcast.core.ItemListener

  def initialize
  end

  def entryAdded(e)
    puts "Event found : " + e.getKey + " value = " + e.getValue
  end
end

class MyClass
  def initialize
    @client = HazelcastClient.newHazelcastClient("dev", "dev-pass", "192.168.1.3:5701")
  end

  def listen
    sample = MyListener.new
    map = @client.getMap("default")
    map.addEntryListener(sample, true)
  end
end

c = MyClass.new
c.listen


#######################
Hazelcast Server

hazelcast[default] > m.put 'key2' 'value2'
null
hazelcast[default] >


########################
Output

# jruby test.rb

Event found : 'key2' value = 'value2'

Thursday, November 19, 2009

Use mod_rails with Freebsd 7.0

Man, it's been a while, but I forgot how much I hate FreeBSD. Reminds me of the old days when Solaris refused to ship their OS with a compiler or a decent shell, or required you to update /etc/nsswitch.conf to use DNS. Oh wait, they still do that with Solaris 10... Ok, sorry, tangent.

So if you need to compile mod_rails (aka Passenger) on FreeBSD 7 with Apache 2 and you used the default port install, then you need to recompile with

make USE_THREADS=yes

and you will avoid that very nasty "Bus Error (core dumped)"

The other option is to use the worker MPM and you should be good as well. I still can't get mod_proxy to work, but that's another issue.

Fun times.