xo tech/

XO Group Inc's Technology Blog. We build awesome products for brides [theknot.com], couples [thenest.com] and moms [thebump.com]. Our offices are in NYC and Austin.


API Craft Talk: Evolving REST for an Internet of Things World

This Wednesday, Todd Montgomery, an ex-NASA researcher who has worked with acronymic government agencies we still hold in high regard, will be giving a talk on the Internet of Things and REST APIs.

The talk will be at the XO Group offices on the 25th floor at 6:30PM on Wednesday, July 16th. It will be gently catered with pizza, beer and wine. Please RSVP for the Meetup event so we know you’re coming.

The Internet of Things is far more real-time than the internet described in the original REST articles. In this talk Todd will focus on WebSockets, the quaint “HTML5” technology that does more than just make chat clients better. The communication of things has implications for API design: state changes are more fluid (a thing responding to weather changes is effectively responding to chaos), real-time requirements and huge data volumes impact scale, and we can hope for an even more standardized protocol (imagine redesigning your thing each time a new protocol version becomes available).

Todd is currently chief architect at Kaazing, a company in the middle (and going both ways) of the event-driven communication highway, and thus knows a thing or two (thousand) about this topic.


Developing with Ruby on Rails

I had previously talked about some of my frustrations with getting Rails up and running. I’ve since had a chance to start building out my first gem and app, and I had a few thoughts while working with Rails:

Gems are great. Adding features that are used all the time across many sites saves so much time.

Ruby is pretty intuitive once you know the basics of the syntax, and makes it easy to figure out what you can and can’t do with an object simply by viewing its methods.
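For instance, in irb you can grep an object’s method list to see what it supports; the string and the pattern here are just an illustration:

```ruby
# Discovering an object's capabilities by inspecting its methods.
s = "Rails"

# All public methods, filtered down to the case-related ones:
puts s.methods.grep(/case/).sort.inspect
# => includes :upcase, :downcase, :swapcase, :casecmp, and their bang variants

# respond_to? answers the "can I do this?" question directly:
puts s.respond_to?(:upcase)   # true
puts s.respond_to?(:flatten)  # false (that's an Array method)
```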

Rails convention is something you sometimes just have to know. While developing, a few things felt counterintuitive to me because behavior is often determined by convention. For example, an action will automatically render the view of the same name when you don’t specify a view to render.

It’s nice that the installed gem version is somewhat sticky. It’s still safer to tell the app which version you want, to prevent unwanted upgrades.

Nesting layouts is pretty easy, and a very handy way of handling sub-containers for your content. The implementation in the Rails Guides lets developers either pass content in with the sub-layout wrapped around it, or use the content directly from the views. See: http://guides.rubyonrails.org/layouts_and_rendering.html#using-nested-layouts

Overall, I’ve had a pretty positive experience so far, save for the error messaging. It does OK pointing you to where the error is most of the time, but what the error actually is often stays pretty vague.


When ruby-x.x.x just won’t install…

I’m new to Ruby and Rails. I’m still learning a lot, and so far I’ve enjoyed developing with it, but it hasn’t been without growing pains.

When I started learning about Ruby and Ruby on Rails, I began to see why developers who use Ruby love it so much. It’s simple syntactically, and does a lot for very little code. On top of that, it’s constantly being updated with new functionality, partly due to an active, passionate developer community that wants to expand what it can do. However, this constant updating has led to an annoying problem.

Not too long ago, while attempting to run a Rails app I had freshly downloaded from Git, I was notified my version of Rails was too old. “No problem”, I told myself. “I’ll just use trusty old RVM to update, and all will be well!”

An hour and a half later, I still couldn’t install the newer version, constantly running into a vague message about an error running some file. What the heck was going on? RVM had seemingly downloaded a bunch of files, so why couldn’t I run them?

It turns out it didn’t run because the file wasn’t there, and neither the error message nor the logs indicate this. RVM didn’t download those files because it didn’t know it had to. The solution was to have Homebrew, which was used to install Ruby and Rails in the first place, check for the required files. By adding a flag to the end of the install command, Homebrew makes sure all the necessary files are present for a successful install:

rvm install 2.x.x --autolibs=brew

Live and learn. Here’s to hoping the error messaging/logging improves, whether on the RVM side or the OS X side.

Thank you for entering to win a ticket to GORUCO!

We have chosen and notified our winners. Thank you so much for participating! Please be sure to look out for updates from the team on events and more fun and cool giveaways!


The XO Tech Team

Speeding up Rspec integration testing with the VCR gem

In today’s test-driven development paradigm, of all the specs our application needs, integration tests can be among the most time-consuming. This is particularly true if the integration consists of issuing calls to a web service’s RESTful API: our test flow is bottlenecked by the service’s request/response times and availability.

VCR Gem to the rescue

I was introduced to this gem by Wojtek Mach, a very talented Ruby consultant my team had the pleasure of working with. We were commissioned to write a gem that would wrap calls to one of our web services, which meant that by the library’s very nature almost all specs would be integration tests calling its API. During our pair-programming sessions the tests started to build up, and the time consumed by each call added an unnecessary burden to the rig. Not to mention that if any of the services in the QA environment went down for whatever reason (such as a developer on another team deploying a release), our testing would be blocked until the service was available again. Furthermore, it became apparent that most tests repeated the same request and response payloads. Wojtek decided to use VCR to mitigate these issues.

The VCR gem integrates snugly into RSpec. At its core, the library saves the full request/response information in YAML files that are later used instead of real calls to the service. These files, called “cassettes”, are automatically updated whenever the specs change. VCR knows to look for calls issued by popular HTTP libraries like HTTParty. It also neatly organizes the payload information into files and folders named after the spec hierarchy. Finally, it is very easy to configure and implement.

A Working example

To illustrate the gem’s usefulness, let’s suppose we have a search application that calls the Google Web Search API. As part of our tests we would have to issue real HTTP calls to the service. If we are just doing 2 or 3 calls, it’s not really that bad. But what if we need more than 30 or 100, and each one takes approx. 200-300 ms? We could potentially be adding more than 10 seconds to the total test time. What if suddenly the service or the connection goes down? We would be faced with our tests getting stuck for very long stretches of time, waiting for the endpoints to be available, or just crashing due to timeouts. I created this Web Searcher Ruby example. The web_searcher.rb class contains 2 methods, “search_urls” and “search_titles”. Both issue a call to the Google API and parse the urls and titles out of the response. It has a series of tests for each method.
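The parsing half of such a class can be sketched in a few lines. This is only an illustration, not the example repo’s code; it assumes the response JSON nests results under responseData, as that API did:

```ruby
require 'json'

# A minimal sketch of the parsing side of a web searcher. The HTTP call is
# omitted; the assumed JSON shape is:
#   { "responseData" => { "results" => [ { "url", "titleNoFormatting" } ] } }
class WebSearcher
  def parse_urls(body)
    results(body).map { |r| r['url'] }
  end

  def parse_titles(body)
    results(body).map { |r| r['titleNoFormatting'] }
  end

  private

  # Dig out the results array, tolerating missing keys.
  def results(body)
    JSON.parse(body).fetch('responseData', {}).fetch('results', [])
  end
end

sample = '{"responseData":{"results":[{"url":"http://example.com","titleNoFormatting":"Tesla Model X"}]}}'
puts WebSearcher.new.parse_titles(sample).inspect  # ["Tesla Model X"]
```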

Implementing VCR

First we need to add vcr to our gemfile:

group :test do
    gem 'vcr'
end

Next, if we use a spec_helper file, we can dedicate a section to configure vcr:

require 'vcr'

VCR.configure do |c|
    c.hook_into :webmock # requires the webmock gem in the :test group
    c.cassette_library_dir = 'spec/support/vcr_cassettes'
    c.allow_http_connections_when_no_cassette = true if ENV['REAL_REQUESTS']
    c.default_cassette_options = { :record => :new_episodes }
end

In this block we set options like where the cassette files will be stored (‘spec/support/vcr_cassettes’), which library to hook into and the default cassette’s behaviour.

It would also be nice to have a switch to toggle between real calls to the API and the pre-recorded data, so we add a section that reads the REAL_REQUESTS environment variable. When it is set, the cassettes are “ejected”, real calls are made and the payloads in the files are updated.

real_requests = ENV['REAL_REQUESTS']

RSpec.configure do |config|
    config.before(:each) do
        # eject any cassette here so that real http calls are issued
    end if real_requests
end

After configuring, the last piece of the puzzle is telling the specs to use vcr. The nice thing about it is that all we need to do is add “:vcr => true” as follows:

describe WebSearcher, :vcr => true do

And that’s it! VCR will replace http calls made by HTTParty with pre-recorded data, when available.


Let’s see this in action. On our first run, we call rspec:

$ rspec

This would issue real calls to the web service. We notice the total run time in the console:

Finished in 0.31504 seconds

In this run, VCR wrote the request/response data to cassette files. If we navigate to “spec/support/vcr_cassettes/WebSearcher”, we notice a folder was created for each test scenario (the “describe” statements), in this case “search_titles” and “search_urls”. Looking into “search_titles”, we find a folder for the “when_searching_with_a_valid_query” context, containing the “returns_at_least_1_title.yml” cassette file.


If we inspect the file’s contents:

---
http_interactions:
- request:
    method: get
    uri: http://ajax.googleapis.com/ajax/services/search/web?q=tesla%20model%20x&v=1.0
    body:
      encoding: US-ASCII
      string: ''
    headers: {}
  response:
    status:
      code: 200
      message: OK
    headers:
      Cache-Control:
      - no-cache, no-store, max-age=0, must-revalidate
      Pragma:
      - no-cache
      Expires:
      - Fri, 01 Jan 1990 00:00:00 GMT
      Date:
      - Wed, 28 May 2014 17:11:42 GMT
      Content-Type:
      - text/javascript; charset=utf-8
      X-Content-Type-Options:
      - nosniff
      X-Xss-Protection:
      - 1; mode=block
      Server:
      - GSE
      Alternate-Protocol:
      - 80:quic
      Transfer-Encoding:
      - chunked
    body:
      encoding: ASCII-8BIT
      string: !binary |-
        eyJyZXNwb25zZURhdGEiOiB7InJlc3VsdHMiOlt7IkdzZWFyY2hSZXN1bHRD

we’ll notice the full http request/response data.

Now if we call rspec again, this run will use the pre-recorded HTTP data, so after it executes the console shows a shorter run time:

Finished in 0.01381 seconds

So, in this case 0.01381 vs 0.31504. Almost 22 times faster!!

Now let’s say that for this run we want to issue real calls. All we would need to do is use the “REAL_REQUESTS” switch:

$ REAL_REQUESTS=true rspec

…and once again observe how the time increases, due to the endpoints being hit for real.


In this example, a gain of a few hundred milliseconds might not seem like much for just 2 HTTP calls. But if our tests consist of many more, the speed increase will be significant. The VCR gem provides an easy and fast way of improving our spec performance without having to rely on real HTTP calls.

Thank you and stay tuned for my next post!

Alexander Copquin.


XO Tech is hosting API Craft NYC meetup with QCon speakers

XO Tech is proud to be hosting the API Craft NYC meetup about API design and monitoring with QCon speakers Mike Amundsen and John Musser:

• Mike Amundsen, Principal API Architect at the Layer 7 API Academy and author of the O’Reilly book RESTful Web APIs, will present “A Methodology for API Design.”

• John Musser, founder of programmableweb.com, and now CEO of API Science, will present “What’s Hot, What’s Not in API Monitoring.”

Please sign up and join us in the XO Cafe, have some pizza and beer, and talk APIs.



Working with Amazon Simple Notification Service

Hi! In this, the first of a two-part series of posts, I would like to talk about the Amazon Simple Notification Service web service we use here at XO: what it is, how it works and how to configure it.

Amazon Simple Notification Service (SNS) provides a convenient means for distributing data between applications. SNS is organized in user-created “Topics”. Applications that generate the data to be distributed are called “Publishers” and applications that consume the data are called “Subscribers”. The latter can be HTTP(S) endpoints, email, SMS, mobile apps or Amazon Simple Queue Service (SQS) queues.


Use case

As a use case, let’s say we have an application that manages user accounts. Now let’s suppose that other applications need to be “aware” of changes in user data: for example, a newsletter mailing system that needs to know when new users are created, their information updated or their accounts deleted, or an analytics tool that builds reports on user statistics and manages its own database.

By using SNS, we can have these systems talk to each other in a loosely coupled fashion. The accounts managing application would be the publisher. Then the mailing and analytics systems would be subscribers to the SNS topic.


The publishing application can push user data changes to the topic in the form of messages, using one of the many SDK libraries (Ruby, .NET, JavaScript, etc.). The subscribers can, for example, expose REST HTTP endpoints in their API that parse the incoming messages from the topic.

Creating an SNS Topic and Subscriptions

We can create SNS topics and subscriptions through the AWS console.



This is our home screen to access all services in the AWS constellation. Let’s go ahead and click on “SNS Push Notification Service”


On the following screen, click on “Create New Topic”

We’ll call the topic “User_Data_Updates” . Click on “Create Topic”.


Take note of the topic ARN:


This key generated by AWS is the topic’s unique identifier, used by both publishers and subscribing applications.

Now, to add a subscription, click on “Create subscription”


SNS allows different types of subscriptions or “protocols”, such as HTTP, HTTPS, email, SMS or SQS queues. We will choose HTTP and enter the endpoint’s URL. Finally, click on “Subscribe”.


The subscription will show up on the console with a status of “Pending confirmation”


Finalizing the subscription and message formats

In order for endpoints to begin receiving publisher messages, they need to “confirm” the Topic subscription to SNS.

Let’s talk a little about the message formats. SNS messages to subscribers come in 2 flavors:

-Subscription confirmation

-Notification


Subscription confirmations are sent only once, when an endpoint is subscribed to the topic. This is what the message looks like:

x-amz-sns-message-type: SubscriptionConfirmation
x-amz-sns-message-id: 165545c9-2a5c-472c-8df2-7ff2be2b3b1b
x-amz-sns-topic-arn: arn:aws:sns:us-east-1:123456789012:MyTopic
x-amz-sns-subscription-arn: arn:aws:sns:us-east-1:123456789012:MyTopic:2bcfbf39-05c3-41de-beaa-fcfcc21c8f55
Content-Length: 1336
Content-Type: text/plain; charset=UTF-8
Host: example.com
Connection: Keep-Alive
User-Agent: Amazon Simple Notification Service Agent

{
  "Type" : "SubscriptionConfirmation",
  "MessageId" : "165545c9-2a5c-472c-8df2-7ff2be2b3b1b",
  "Token" : "2336412f37fb687f5d51e6e241d09c805a5a57b30d712f794cc5f6a988666d92768dd60a747ba6f3beb71854e285d6ad02428b09ceece29417f1f02d609c582afbacc99c583a916b9981dd2728f4ae6fdb82efd087cc3b7849e05798d2d2785c03b0879594eeac82c01f235d0e717736",
  "TopicArn" : "arn:aws:sns:us-east-1:123456789012:MyTopic",
  "Message" : "You have chosen to subscribe to the topic arn:aws:sns:us-east-1:123456789012:MyTopic.\nTo confirm the subscription, visit the SubscribeURL included in this message.",
  "SubscribeURL" : "https://sns.us-east-1.amazonaws.com/?Action=ConfirmSubscription&TopicArn=arn:aws:sns:us-east-1:123456789012:MyTopic&Token=2336412f37fb687f5d51e6e241d09c805a5a57b30d712f794cc5f6a988666d92768dd60a747ba6f3beb71854e285d6ad02428b09ceece29417f1f02d609c582afbacc99c583a916b9981dd2728f4ae6fdb82efd087cc3b7849e05798d2d2785c03b0879594eeac82c01f235d0e717736",
  "Timestamp" : "2012-04-26T20:45:04.751Z",
  "SignatureVersion" : "1",
  "Signature" : "EXAMPLEpH+DcEwjAPg8O9mY8dReBSwksfg2S7WKQcikcNKWLQjwu6A4VbeS0QHVCkhRS7fUQvi2egU3N858fiTDN6bkkOxYDVrY0Ad8L10Hs3zH81mtnPk5uvvolIC1CXGu43obcgFxeL3khZl8IKvO61GWB6jI9b5+gLPoBc1Q=",
  "SigningCertURL" : "https://sns.us-east-1.amazonaws.com/SimpleNotificationService-f3ecfb7224c7233fe7bb5f59f96de52f.pem"
}

Notice the following 2 headers:

x-amz-sns-message-type: SubscriptionConfirmation

x-amz-sns-topic-arn: arn:aws:sns:us-east-1:123456789012:MyTopic

The first header tells our application that SNS is awaiting a subscription confirmation. The second header has the topic’s ARN.

To confirm the subscription, the endpoint needs to send an HTTP GET request to the “SubscribeUrl” included in the message’s request payload.

Therefore, our endpoint needs to identify both the topic ARN (to make sure it is the right topic) and the SubscriptionConfirmation message type.
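That check can be sketched in plain Ruby. The header names are the ones shown above; the expected ARN and the return symbols are illustrative:

```ruby
# Decide how to treat an incoming SNS request based on its headers.
# EXPECTED_ARN and the returned symbols are illustrative.
EXPECTED_ARN = 'arn:aws:sns:us-east-1:123456789012:MyTopic'

def classify_sns_request(headers)
  # Ignore anything not coming from the topic we subscribed to.
  return :ignore unless headers['x-amz-sns-topic-arn'] == EXPECTED_ARN

  case headers['x-amz-sns-message-type']
  when 'SubscriptionConfirmation' then :confirm_subscription
  when 'Notification'             then :process_notification
  else :ignore
  end
end

puts classify_sns_request(
  'x-amz-sns-message-type' => 'SubscriptionConfirmation',
  'x-amz-sns-topic-arn'    => EXPECTED_ARN
)  # confirm_subscription
```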

Once the GET request is issued, we can verify that the subscription is confirmed by checking on the console that the topic has assigned a subscription ID to the endpoint.


Once the endpoint is “subscribed”, messages will start being sent to it. This time, they will be of type “Notification” and this is what they will look like:

x-amz-sns-message-type: Notification
x-amz-sns-message-id: da41e39f-ea4d-435a-b922-c6aae3915ebe
x-amz-sns-topic-arn: arn:aws:sns:us-east-1:123456789012:MyTopic
x-amz-sns-subscription-arn: arn:aws:sns:us-east-1:123456789012:MyTopic:2bcfbf39-05c3-41de-beaa-fcfcc21c8f55
Content-Length: 761
Content-Type: text/plain; charset=UTF-8
Host: ec2-50-17-44-49.compute-1.amazonaws.com
Connection: Keep-Alive
User-Agent: Amazon Simple Notification Service Agent

{
  "Type" : "Notification",
  "MessageId" : "da41e39f-ea4d-435a-b922-c6aae3915ebe",
  "TopicArn" : "arn:aws:sns:us-east-1:123456789012:MyTopic",
  "Subject" : "test",
  "Message" : "test message",
  "Timestamp" : "2012-04-25T21:49:25.719Z",
  "SignatureVersion" : "1",
  "Signature" : "EXAMPLElDMXvB8r9R83tGoNn0ecwd5UjllzsvSvbItzfaMpN2nk5HVSw7XnOn/49IkxDKz8YrlH2qJXj2iZB0Zo2O71c4qQk1fMUDi3LGpij7RCW7AW9vYYsSqIKRnFS94ilu7NFhUzLiieYr4BKHpdTmdD6c0esKEYBpabxDSc=",
  "SigningCertURL" : "https://sns.us-east-1.amazonaws.com/SimpleNotificationService-f3ecfb7224c7233fe7bb5f59f96de52f.pem",
  "UnsubscribeURL" : "https://sns.us-east-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-1:123456789012:MyTopic:2bcfbf39-05c3-41de-beaa-fcfcc21c8f55"
}

Notice that now the “message-type” header has a value of “Notification”.

The message’s actual data is now present on the “Message” key on the payload.
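Pulling the useful fields out of a Notification is then a one-liner with the json library; the sample values below mirror the message shown above:

```ruby
require 'json'

# A Notification payload trimmed down to the fields we care about
# (values mirror the sample message above).
payload = <<-JSON
{
  "Type" : "Notification",
  "Subject" : "test",
  "Message" : "test message",
  "TopicArn" : "arn:aws:sns:us-east-1:123456789012:MyTopic"
}
JSON

notification = JSON.parse(payload)
# The actual data published to the topic lives under the "Message" key.
puts notification['Message'] if notification['Type'] == 'Notification'
# test message
```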

We can test the endpoints by manually publishing to the topic. On the console, click on the “Publish” button. Fill in the form, and click on “Publish Message”.


Then verify with whichever debugging / monitoring tool you are using, that the messages indeed are arriving.

With these simple steps, we can very quickly create distributed architectures that keep our systems loosely coupled. We can dynamically allocate resources to consume data in different ways, without needing to modify the other participating systems.

On my next post I will get into the details of implementing http endpoints that subscribe to an SNS topic.

Thank you for reading and see you on my next post!

Alex Copquin


Creating SNS subscription endpoints with Ruby on Rails

Hello again! This post is the second of a two-part series on Amazon Simple Notification Service. Here I will describe a simple way to implement, with Ruby on Rails, the basics of a REST HTTP endpoint that subscribes to an AWS SNS topic.

Note: for the sake of simplicity, my Ruby examples here might not adhere to best practices (such as placing values in config files instead of hard-coding them). I hope the purists out there will forgive my blasphemy.

Our endpoint needs to do 2 things:

-Recognize SNS subscription messages and send subscription confirmations.

-Recognize notification messages, parse the content and perform the intended operations on the data.

Let’s assume our API has a “/users” endpoint subscribed to our SNS service. In the previous post, our SNS topic ARN was arn:aws:sns:us-west-2:867544872691:User_Data_Updates. Let’s also assume we’ve already created a Rails application and have the routes and controller in place for the “users” resource.


For this example I will be using the ‘json’ and ‘httparty’ ruby gems.

require 'json'
require 'httparty'

The first thing we need to do is give the “create” action in our users_controller the ability to send out subscription confirmations. The method must recognize the “SubscriptionConfirmation” value of the “x-amz-sns-message-type” HTTP header on the incoming request. It must also parse the value of the “x-amz-sns-topic-arn” header. Let’s go ahead and assign those 2 to local variables.

# POST /users
def create
    # get amazon message type and topic
    amz_message_type = request.headers['x-amz-sns-message-type']
    amz_sns_topic = request.headers['x-amz-sns-topic-arn']

Next, let’s make sure we are responding to calls from the right SNS topic:

return unless !amz_sns_topic.nil? &&
    amz_sns_topic.to_s.downcase == 'arn:aws:sns:us-west-2:867544872691:User_Data_Updates'

Next, let’s parse the request body into a hash for later use:

request_body = JSON.parse request.body.read

Now let’s verify that this is a subscription confirmation and if so, send it.

# if this is the first time confirmation of subscription, then confirm it
if amz_message_type.to_s.downcase == 'subscriptionconfirmation'
    send_subscription_confirmation request_body
end

I created a separate “send_subscription_confirmation” method that simply sends an HTTP GET request to the SubscribeURL defined in the incoming message’s payload:

def send_subscription_confirmation(request_body)
    subscribe_url = request_body['SubscribeURL']
    return if subscribe_url.to_s.empty?
    HTTParty.get subscribe_url
end

If our subscription confirmation goes out fine, from then on all incoming messages will be of type “Notification”. So all that remains for our controller is to recognize these messages and perform the intended process on the data (which has been previously parsed into the “request_body” hash):

if amz_message_type.to_s.downcase == 'notification'
    do_work request_body
end
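Putting the fragments above together, here is a self-contained sketch of the whole flow. To keep it runnable outside Rails, the headers are a plain hash, the body a raw string, and the HTTP GET is injected as a lambda standing in for the HTTParty call; the return symbols and the example URL are illustrative:

```ruby
require 'json'

TOPIC_ARN = 'arn:aws:sns:us-west-2:867544872691:User_Data_Updates'

# Consolidated sketch of the controller logic. `headers` is a hash,
# `body` the raw request body; `http_get` stands in for HTTParty.get
# so the flow can be exercised without a network.
def handle_sns_post(headers, body, http_get:)
  return :wrong_topic unless headers['x-amz-sns-topic-arn'].to_s.casecmp(TOPIC_ARN).zero?

  request_body = JSON.parse(body)
  case headers['x-amz-sns-message-type'].to_s.downcase
  when 'subscriptionconfirmation'
    url = request_body['SubscribeURL']
    return :no_subscribe_url if url.to_s.empty?
    http_get.call(url)  # confirm the subscription
    :confirmed
  when 'notification'
    # do_work(request_body) would go here
    :notified
  else
    :ignored
  end
end

fetched = []
result = handle_sns_post(
  { 'x-amz-sns-message-type' => 'SubscriptionConfirmation',
    'x-amz-sns-topic-arn'    => TOPIC_ARN },
  '{"SubscribeURL":"https://sns.example/confirm"}',
  http_get: ->(url) { fetched << url }
)
puts result         # confirmed
puts fetched.first  # https://sns.example/confirm
```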

Using this simple template, you can very quickly create new subscriber applications that consume the SNS data and dynamically assign or remove them from the data stream, allowing for a very powerful distributed data architecture.

Thank you for reading and see you on my next post!

Alex Copquin

My Flatiron Ruby experience- Cont’d

I mentioned in my last blog post that I need to dig further into REST and MVC. Over the weekend I had some time to catch up. REST (Representational State Transfer) acts as an interface in client-server relationships. The central principle of REST is the resource, and it is through the interface that we interact with these resources. One of the ways to interact with resources is through HTTP verbs: GET to read a resource, POST to create one, PUT/PATCH to update it and DELETE to remove it.

Now coming to MVC (Model View Controller), I have a better picture now. It is a design pattern for applications that have both logic and a visual representation of that logic, such as web apps. When an HTTP request comes in, it is passed to the controller, which interacts with the model; the model retrieves the data from the database and makes it available to the controller, which supplies it to the view, and the template for the view is generated.

We also went over scaffolding, which creates files like migrations, models, controllers and views. I see scaffolding as rapidly creating the templates necessary to build the application.

I found the Rails guide here: http://guides.rubyonrails.org/getting_started.html really helpful in understanding the concepts. It starts from scratch and the steps are easy to understand. That finishes week 3 of our training.

In week 4 we started with Active Record migrations, which let us create the DB schema. Rails comes with SQLite, a file-based DB. Active Record is, in MVC terms, the model layer of the app, an ORM (Object-Relational Mapping): for a relational DB, it takes objects within our app and maps them to the relevant parts of the DB. We also went over Active Record associations.

We started testing with Rails, and got introduced to Bundler, an application dependency manager. Our applications depend on third-party libraries to run; Bundler helps us obtain those libraries and manage their versions and sources. One of the most popular places to get them from is rubygems.org, which hosts the majority of gems.

In class we started off with the lab work for Bundler, which consists of specifying gems in the Gemfile: with a version, without a version, the “twiddle-wakka” way, from a remote git repository, with a hash argument to the gem method, and using block syntax. I really enjoyed working on this Bundler lab, maybe because it was the lab where I accomplished a lot on my own, except one last test which pretty much everyone in the class was stuck on.
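For reference, those Gemfile styles look roughly like this (the gem names, versions and repository URL are illustrative, not the lab’s actual contents):

```ruby
source 'https://rubygems.org'

gem 'nokogiri'                   # no version: any version Bundler resolves
gem 'rails', '4.1.1'             # exact version
gem 'rspec', '~> 2.14'           # "twiddle-wakka": >= 2.14, < 3.0
gem 'my_gem', git: 'https://github.com/example/my_gem.git'  # remote git repo
gem 'pg', require: false         # hash argument to the gem method

group :test do                   # block syntax
  gem 'vcr'
end
```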

The other labs I am working on consist of pending tests, so we first need to write the pending tests and then work through the failures. This will take some getting used to.

So far that is all we have covered in 4 weeks. 2 more to go!

My Flatiron Ruby experience

Hello everyone! My name is Shilpa Saini and I am a QA engineer for our Inspiration products. I am attending the Flatiron School, learning Ruby. I am a beginner, FYI :) I was asked to write blog posts on my Flatiron Ruby experience, so here it goes:

The first week was our introduction to Ruby. For the first hour I had no clue what the coach was doing between the terminal and Sublime Text, but during the break I got hold of one of the coaches, who showed me how to clone, go to the working file and open it in Sublime Text. We started our class with a simple lecture covering Ruby basics like the upcase, downcase, reverse, length and size methods. Then we went over a sample RSpec exercise.

We learnt how to run RSpec, start with all tests failing and slowly solve the failing tests. Once done, we were given an assignment. The first lab I ever did was FizzBuzz. So far I have been writing methods for tests that already exist. There were about 10 altogether, some pretty simple, like fixing a syntax error so a method says hello or returns a phrase, and some complex (well, for me!) that required creating multiple methods and then calling them from one method.
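For the curious, a FizzBuzz along the lines of that lab might look like this (my own sketch, not the school’s solution):

```ruby
# Classic FizzBuzz: multiples of 3 become "Fizz", of 5 "Buzz",
# of both "FizzBuzz"; everything else stays a number.
def fizzbuzz(n)
  if n % 15 == 0 then 'FizzBuzz'
  elsif n % 3 == 0 then 'Fizz'
  elsif n % 5 == 0 then 'Buzz'
  else n.to_s
  end
end

puts (1..15).map { |n| fizzbuzz(n) }.join(' ')
# 1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz
```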

While working I noticed I was nesting my work. I learnt the best practice is to always go to my home directory, then clone and open the file that needs to be worked on, to keep everything neat.

On the 2nd day we were given videos to watch on arrays, looping, iteration and scope. We went over a few homework labs and started a new assignment. We created accounts on GitHub and were shown commands for working in a branch, merging to master, and deleting the branch when done. So far I have created 8 different methods on vowels: how to find the index of the first vowel in a word, and methods that return whether a given letter is a vowel using if/else, case statements, no if/else, and single-line code. I also worked on the badges-and-schedules and deli-counter labs. I haven’t tried checking any code into the given repository yet, but will once I am done with my assignments.

In my second week of Ruby training, we learnt about arrays, hashes, Enumerable (basic and advanced), blocks and yield, and scraping. The labs we worked on included the select and collect methods, which I used for picking similar items out of an array. I wrote methods to add an item to an existing hash, and to loop through a hash and return all the supplies, etc.
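To make the select/collect distinction concrete: select filters an array down to the items a block approves, while collect (alias map) transforms every item. A quick illustration (the array is made up):

```ruby
supplies = ['pen', 'paper', 'stapler', 'pencil']

# select keeps only the items the block returns true for
pens = supplies.select { |item| item.start_with?('pen') }
puts pens.inspect      # ["pen", "pencil"]

# collect (a.k.a. map) transforms every item
shouted = supplies.collect { |item| item.upcase }
puts shouted.inspect   # ["PEN", "PAPER", "STAPLER", "PENCIL"]
```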

We got introduced to object-oriented programming: classes, constants and instance variables. We learnt how to define a class method. One of the cool things we learnt is that we can reopen a class to define more methods on it without overwriting its pre-existing definition.

Coming to the third week, we started with Rails.

We got introduced to the Rack library, and learnt that Rack provides an interface between web servers that support Ruby and Ruby frameworks. We walked through a minimal Rack application, which requires the rack library and defines a class with a single method, “call”, that accepts a single argument, env: a data structure containing everything the app needs to know about the incoming HTTP request, including the headers and the body. The returned array contains the HTTP status of the response, the headers and the body of the response. We ran the Rack app, which starts a web server called Puma. Going to localhost, we saw it generated the body of the response. We changed the body, restarted the Rack app, revisited localhost and on refresh saw the new body in the response.

To summarize what I learnt: the browser sends an HTTP request to the Puma web server, which calls the call method of our Rack app and passes the request along; our app returns the array, which Rack builds back into an HTTP response, and Puma takes that response and sends it back to our browser.

We learnt that the Rack library is built on a design pattern called the pipeline pattern, which means that when a web server passes along an HTTP request, an app does not need to send an immediate response: it can pass the request on to another Rack app, and so on until one of them does respond; the response then passes back through all the Rack apps and finally returns to the web server.
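The call interface and the pipeline pattern can be sketched without the rack gem at all, since a Rack app is just an object whose call(env) returns a status, headers and body. The class names below are mine, and the middleware is a deliberately simple pass-through:

```ruby
# A minimal "Rack app": anything responding to call(env) and returning
# a [status, headers, body] triple.
class HelloApp
  def call(env)
    ['200', { 'Content-Type' => 'text/plain' }, ["Hello from #{env['PATH_INFO']}"]]
  end
end

# A middleware wraps the next app in the pipeline: it may answer the
# request itself, or (as here) pass it along and decorate the response.
class TimingMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    started = Time.now
    status, headers, body = @app.call(env)
    headers['X-Runtime'] = (Time.now - started).to_s
    [status, headers, body]
  end
end

app = TimingMiddleware.new(HelloApp.new)
status, headers, body = app.call('PATH_INFO' => '/hello')
puts status      # 200
puts body.first  # Hello from /hello
```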

We had a brief intro to Sinatra. When an HTTP request comes in, Sinatra matches the path designated in the request and executes the associated block.

We also got started on Ruby on Rails’ two important concepts of REST and MVC. I need to dig into them more to understand them better.

More to come in coming weeks, I will sign off for now. Hope you all had a great long weekend :) 


This weekend XO Tech was proud to host our first non-profit hackathon, Hack Upon A Cause, here at XO Group’s HQ. We had over 150 engineers, designers and entrepreneurs in house working on sustainable technological solutions for four wonderful non-profits. Each of the organizations we chose to work with is helping to better the lives of women and teens around the globe, and we could not be happier to lend a hand in that effort!

We’ve had an absolute blast and can’t wait to see what everyone’s built! Check out some photos below from day one of the hackathon and stay tuned to see who won!


You can learn more about the amazing organizations we partnered with here, and follow us on Twitter for more pics and updates at #hackuponacause

CEO Robot


When our CEO David Liu can’t be in the office, we still get to see him zipping around on his sweet robot ride, à la Sheldon from The Big Bang Theory.

Embrace the frontend

In my last post I wrote about getting out of the “forms” business and into the “AJAX” business, which I view as a pivotal step in the development of Web applications that are both better managed and more responsive.

The problem is that once you are liberated from the tyranny of <forms> you find that you can no longer be just a “backend developer” who surfaces web pages. You must learn the language of the frontend. I know you love your strongly typed, server-side cozy blanket, and you can certainly go work for some monolithic death corp and write the same code for the rest of your life. But if you want to develop interesting applications and stay current in the flow of software technology, get over it and start learning javascript.

One’s first steps into javascript invariably go as follows:

At that point you think you are being responsible and call it done. A year goes by and you find that you have a lot of those files, with lots of duplicated code and/or duplicated functionality with different implementations. You consolidate that into some horrible library that contains every “helper function” in the world and include the whole file on every page. You probably minify it, so when it breaks it’s a pain to figure out what’s going on, and now you think you’re done.

Well, you’re not. You wouldn’t write your C# application like that. In fact, you’d probably go on some career-ending rant if you saw it being done (much as I am doing here with JS). You must treat your JavaScript application with respect if you want it to respect you. Sure, it lives for perhaps ten seconds, but those are pivotal seconds, and that time can and will grow as you feel the joy of building an app on the client side.

So how do you treat it with respect? Well, the KEY to success is to employ a pattern, a nice pattern, and stick with it for every page. JS doesn’t have a very appealing object-oriented story out of the box, but there are a few patterns for creating objects or classes that are easier and more familiar to work with. Some are better than others. And there are several frameworks that provide implementations of these patterns and generally wrap them up with some other functionality.
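As a sketch of what I mean (the names here are mine, not from any particular framework), the classic constructor-plus-prototype pattern gives each page one namespaced object instead of a pile of loose globals:

```javascript
// One page, one object: the constructor + prototype pattern.
// XO and CartPage are illustrative names, not a real library.
var XO = XO || {};

XO.CartPage = function (items) {
  this.items = items || [];
};

XO.CartPage.prototype.total = function () {
  // Sum of price * quantity across all line items, in cents.
  return this.items.reduce(function (sum, item) {
    return sum + item.price * item.qty;
  }, 0);
};

// Each page creates exactly one instance, in exactly one place:
var page = new XO.CartPage([{ price: 500, qty: 2 }, { price: 100, qty: 1 }]);
```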

As I mentioned in my previous post, I like Knockout.js very much. However, Knockout endorses a rather repugnant pattern of dispersing your JavaScript between your HTML and a simple JavaScript object. You essentially (within Knockout’s data-bind attribute) put “onclick -> fire this event in my ViewModel.” Their ViewModel is just a plain old JavaScript object with properties and functions on it. The idea of putting a bunch of behavior inside of HTML is most unpleasant. However, I am willing to concede the need for a two-way binding attribute. The payoff is so great and the offense rather minor. But when you start putting all manner of logic in the HTML attributes, it’s my loud opinion that you have crossed the line, Jack. Furthermore, the anemic Knockout ViewModel is not very helpful when it comes to organizing your code. Thus, I say that Knockout has an excellent two-way binding story, but you should leave it at that. Don’t let it take you down the dark road of HTML decoration.
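To make the line I’m drawing concrete, here’s a sketch (with a tiny stand-in for ko.observable so it runs on its own; in a real page you’d load knockout.js, and the ViewModel names are hypothetical). The binding attribute stays dumb; anything resembling logic lives on the ViewModel:

```javascript
// Minimal stand-in for ko.observable so this sketch is self-contained;
// use the real knockout.js in an actual page.
var ko = {
  observable: function (value) {
    return function (newValue) {
      if (arguments.length) { value = newValue; }
      return value;
    };
  }
};

// Acceptable: a dumb two-way binding in the markup...
//   <input data-bind="value: firstName" />
// ...backed by a ViewModel that owns all the logic:
function PersonViewModel() {
  this.firstName = ko.observable("");
}

// Crossing the line would be markup like:
//   <span data-bind="text: firstName().length ? firstName().toUpperCase() : 'ANONYMOUS'"></span>
// Keep that logic on the ViewModel instead, where it can be tested:
PersonViewModel.prototype.displayName = function () {
  var name = this.firstName();
  return name.length ? name.toUpperCase() : "ANONYMOUS";
};
```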

I can see how KO would be compelled to provide other functionality so as to seem more like a “framework” than a “library,” and I can see how they would try to extend what is already working so well for them (i.e. the HTML decoration). I can see this, but you can’t make me use it. Other “frameworks,” Angular.js for one, have taken a similar direction. Angular is very popular, but their HTML wrangling makes KO look like a minor offender. I won’t go any further into my objections. If you want to debate, hit the comments.

In my next post I will write about how I use a combination of backbone.js and ko.js to create a best-of-both-worlds cocktail.

Furthermore, I am cross-posting this to my personal blog, in case you like it so much you want to read it twice: http://cannibalcode.blogspot.com/

Shredding your forms

Using the <form> tag to wrap elements and then submit the data contained within worked in the ’90s. Hell, it worked in the early 2000s. But with the advent of AJAX techniques, the <form> element is now really more of a liability than a help. The problems are as follows:

While there are workarounds for these problems that allow you to use AJAX to catch a form submit, take it from me: it can be a byzantine nightmare to try to customize.

So when you throw out your horrible <form> tags and start using AJAX to post and get data from the server, you will find that, while it did a rather crap job of it, the <form> tag did at least harvest your values from the elements it contained. Without it, you must now query each element that contains data you want to post back. While this is exactly where you get the benefit, it is also a pain. Add to this the fact that, if you are using C# MVC, MVC expects that data to come back in a very precise and unintuitive format, and you are now faced with a rather boring, if not daunting, task. In fact, it can be so daunting that I may do a blog post explaining how to do it.
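To give a taste of that precise and unintuitive format: ASP.NET MVC’s default model binder wants collection fields flattened into indexed keys before you post them. A sketch, with a hypothetical “items” parameter and field names:

```javascript
// Flatten an array of objects into the items[0].Name-style keys that
// ASP.NET MVC's default model binder expects for a collection parameter.
// "items" and the field names here are hypothetical.
function toMvcPayload(items) {
  var payload = {};
  items.forEach(function (item, index) {
    Object.keys(item).forEach(function (field) {
      payload["items[" + index + "]." + field] = item[field];
    });
  });
  return payload;
}

// Produces { "items[0].Name": "Tux", "items[0].Qty": 2, "items[1].Name": "Veil", ... }
var payload = toMvcPayload([{ Name: "Tux", Qty: 2 }, { Name: "Veil", Qty: 1 }]);
```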

Luckily, the solution is not only beautiful, it is wonderful and awesome, all wrapped into one. By employing a model-binding framework like Knockout.js or one of its lesser cousins, you can create a two-way binding between your DOM elements and a JSON object (heretofore referred to as the ViewModel). This means that when you change the value in, say, a text box, the corresponding property on the ViewModel changes as well, and vice versa. So now, when you want to submit your data via AJAX, you don’t talk to the DOM at all. Instead, you just submit the ViewModel object. Again, the versa of this vice is that when you want to update the DOM, you merely speak, in its native tongue, to the ViewModel.
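A sketch of that payoff (again with a minimal stand-in for ko.observable so it runs standalone, and hypothetical names): once the markup binds to the ViewModel, submitting is just serializing the ViewModel, with no DOM queries anywhere.

```javascript
// Stand-in for ko.observable; a real page would load knockout.js and
// let <input data-bind="value: email" /> keep the text box in sync.
var ko = {
  observable: function (value) {
    return function (newValue) {
      if (arguments.length) { value = newValue; }
      return value;
    };
  }
};

function OrderViewModel() {
  this.email = ko.observable("");
  this.quantity = ko.observable(1);
}

// When it's time to submit, talk to the ViewModel, never the DOM:
OrderViewModel.prototype.toPayload = function () {
  return JSON.stringify({ email: this.email(), quantity: this.quantity() });
};

// e.g. $.ajax({ type: "POST", url: "/orders", data: vm.toPayload(), ... });
```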

Creating this two-way binding emancipates you, to some degree, from the business of mucking around in the DOM. I say to some degree because you will most likely still have to interact with the DOM to perform other actions: clicks, show/hide, fades, pixel-pushing, etc. Still, if you can find a way to abstract that noise, you could quite possibly write tests for your JavaScript logic without the incredible hassle of spinning up a browser and mocking your HTML.

I have glossed over A LOT of the implementation details in favor of a much higher-level (and much shorter) post. I would be happy to write a post on the details should anyone ask.

In my next post I will discuss the strategy I have found to be quite fruitful for employing two-way binding without horribly polluting your HTML or creating a deep and vast plate of JavaScript spaghetti.

Furthermore, I am cross-posting this to my personal blog, in case you like it so much you want to read it twice: http://cannibalcode.blogspot.com/