There was a hackathon for all new employees @MS – we won :)

I joined Microsoft IDC, Hyderabad last week 🙂

Joining a company with more than 100,000 employees is a lot more complex than it might sound, and a strong onboarding process is key to letting new hires perform as fast as possible. — Someone on the Internet. Very true.

The onboarding process involved a NEO (New Employee Orientation) program where we met some pretty great managers. It included a lot of sessions: how to navigate work life, what our career graph would and should look like, technical sessions on how much we should focus on QUALITY now, working with teams, and so on. They stressed how much we need to focus on the quality of the product we build, and that's probably the most important lesson we took from the program. The program ended with a 24-hour hackathon. There were 12 teams, and we were free to choose any track, though we had mentors to help us evaluate the feasibility of an idea, tell us whether we should even proceed with it, and help with technical problems.

We pivoted twice from our idea, luckily before the hackathon started. Initially we were thinking about a CDN that would host React.js (or Angular) components so that browsers could request and download each one independently. The crux of the idea is the assumption that when a lot of websites use components, many of them will use the same few ones; if those are cached independently in browsers, that would be a good optimisation. We dropped the idea as we realised it was pretty difficult to even express what the idea was. Then we thought of an open resume platform that stores all resumes in a standard format, so a viewer can render them in a template of their choice rather than the creator's. Finally we settled on building a platform for users to watch videos / movies together. It would involve creating a virtual room where a user can invite others, with the video streamed to everyone in sync.

We were a team of 5, and all of us were pretty excited when Nirmal suggested this idea. We shared it with our mentor and he was like, "this is like a Hangout with one person sharing their screen with a video playing; what's the USP?". So we thought of adding a feature that shares every user's reactions in the room by tracking their expressions and facial coordinates and rendering an avatar for them. This would use significantly less bandwidth than sharing full video with everyone. Best of all, we found an open source library, clmtrackr, which extracts a user's facial coordinates and even estimates likely emotions such as "angry", "sad", "happy", and "surprised". So we knew the way forward.

Figure: Tracking a face using clmtrackr
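clmtrackr's emotion classification produces a score per emotion. A minimal sketch of how a client might reduce those scores to a single dominant label before sending it to the server; the input shape mirrors the library's emotion-classifier example output, and the 0.4 threshold is a hypothetical cut-off, not the value from our hack:

```javascript
// Reduce per-emotion scores (e.g. from clmtrackr's emotion classifier)
// to one dominant label, falling back to "neutral" when no score
// clears the (hypothetical) threshold.
function dominantEmotion(scores, threshold = 0.4) {
  let best = null;
  for (const s of scores) {
    if (s.value >= threshold && (best === null || s.value > best.value)) {
      best = s;
    }
  }
  return best ? best.emotion : "neutral";
}
```

A label like this is far cheaper to broadcast every 0.3 seconds than the raw coordinate data alone would suggest.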

We created a Node.js server to act as a broker between clients. We used Socket.IO for two-way communication between server and clients, the Express framework for routing, and a few more packages for other small tasks. Most of the data was stored in memory on the server, while certain data was stored in a MongoDB database, essentially to preserve state in case the server crashed. On the client side we used the YouTube JavaScript player API to control the player.
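A minimal sketch of the broker's in-memory room state; the function and field names are hypothetical, and in the real hack the Socket.IO connection handlers would call into something like this, with MongoDB persisting snapshots for crash recovery:

```javascript
// In-memory room store on the broker (names hypothetical).
const rooms = new Map();

function createRoom(roomId, hostId) {
  rooms.set(roomId, {
    host: hostId,
    members: new Set([hostId]),
    // Shared player state every client stays synced to.
    video: { id: "demo", position: 0, playing: true },
  });
  return rooms.get(roomId);
}

function joinRoom(roomId, userId) {
  const room = rooms.get(roomId);
  if (!room) throw new Error("no such room: " + roomId);
  room.members.add(userId);
  return room.video; // the new client seeks here to catch up with the room
}
```

Keeping the hot state in memory keeps every broadcast fast; the database only matters when the server restarts.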

The flow of our product: anyone can create a virtual room, and the portal starts with a demo video. There is an option to add videos to a queue, which is pretty much a playlist that reorders based on votes from everyone in the room (a priority queue). A user can invite other users via email; as they join, the video is synced on their clients as well. There was chat functionality to send text messages as you watch, and each client was continuously tracking its user's expression and sending the data to the server every 0.3 seconds, which the server broadcast to the other clients. At the end, our hack looked something like this.
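The vote-reordered queue can be sketched as a plain array kept sorted by votes, highest first; a simple stand-in for a priority queue, with hypothetical function and field names:

```javascript
// A playlist that reorders itself by votes (priority-queue behaviour).
function addVideo(queue, videoId) {
  queue.push({ videoId, votes: 0 });
}

function vote(queue, videoId) {
  const entry = queue.find((e) => e.videoId === videoId);
  if (entry) entry.votes += 1;
  queue.sort((a, b) => b.votes - a.votes); // most-voted video plays next
}
```

For a hackathon-sized room, re-sorting a short array on every vote is plenty fast; a real priority queue only pays off at much larger scale.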

If any user seeked the video to a different time, the seek was replicated for all users in the room, so everyone was viewing the same video at the same frame. There was also a play/pause option that was synced across all clients.

Figure: Demo of a virtual room
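The seek and play/pause sync above can be sketched as a small client-side decision: on each broadcast of the room's master playback state, the client seeks only if it has drifted past a tolerance, so ordinary network jitter doesn't trigger constant seeking. The one-second tolerance and the state shape here are hypothetical:

```javascript
// Decide what a client should do when it receives the room's master state.
function syncAction(local, master, toleranceSec = 1.0) {
  if (local.playing !== master.playing) {
    return master.playing ? "play" : "pause";
  }
  if (Math.abs(local.position - master.position) > toleranceSec) {
    return "seek"; // jump to master.position via the YouTube player
  }
  return "none"; // small drift: leave playback alone
}
```

In the browser, "seek" would translate to a call on the embedded YouTube player, and "play"/"pause" to the corresponding player controls.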

We used the Snap.svg library to construct SVG avatars from the facial coordinates. We were also able to reconstruct the movement of the face in space. It looked something like this.

Figure: Reconstructing face as an avatar
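A minimal sketch of turning tracked facial coordinates into an SVG primitive of the kind an SVG library like Snap.svg can draw. The input format (`[[x, y], ...]`) mirrors the coordinate arrays clmtrackr produces; the drawing call is left as a comment since it needs a browser DOM:

```javascript
// Convert tracked facial coordinates into an SVG points string,
// e.g. for a <polygon> or <polyline> outlining part of the face.
function toPolygonPoints(coords) {
  return coords.map(([x, y]) => `${x},${y}`).join(" ");
}

// In the browser (hypothetical wiring):
//   const paper = Snap("#avatar");
//   paper.polyline(coords.flat()).attr({ stroke: "#333", fill: "none" });
```

Redrawing these primitives on every coordinate broadcast is what makes the avatar follow the face moving in space.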

And guess what: after working for 24 hours without sleeping, living on pizzas, ice cream, soft drinks, lassi, and biryanis, we won 🙂

Figure: we won 🙂 — team: HOUSE STARK

Looking forward to working more on this platform.


