
Filestack: Secure Javascript Library From Untrusted Clients

Browser-side Javascript is a lot of fun, but there’s a serious limitation when it comes to security because the source is publicly viewable.

Given that one of the things we do here at Filepicker.io is provide a javascript library that helps developers upload files into their S3, it’s important that developers have a way to control what is being stored and who is storing it. We recently released a policy-based security protocol on our API, and here are some of the things we learned.

In summary:

1. Security is based on knowing, doing, or owning something that no one else can

For instance, knowing a password, being able to sign a check with your signature, or having your fingerprint are things that others cannot do (in theory). In javascript, this means keeping a secret key server-side and then using it to generate tokens that the javascript can use on your behalf.

2. Security breaks when humans are involved

Sometimes security fails because the math is flawed or the algorithm can be cracked (in hardware or software). However, most of the time, it’s a human that messes things up. For a library with configurable levels of security, this means providing good documentation, code examples, and debug endpoints.

The basics: Security is based on knowing, doing, or owning something that no one else can

This is the crux of security. If you know, can do, or own something that no one else can, then we can verify that you are who you say you are. For instance, a password is something that only you know, which allows you to log in to a website.

This fact is used in everything from digital encryption to physical security, but we’ll be focusing on message authentication. For a javascript library, the most important thing is ensuring that every call, both read and write, was made by an authorized user (and not by someone who just copied your key into their website).

I am whoever you say I am: Identity and Authentication

The two main categories of authentication are symmetric key cryptography and public key cryptography. The symmetric version is where two parties have a shared secret. This shared secret allows both parties to encrypt and decrypt.

Public key cryptography instead has two different keys. One allows for encryption and the other for decryption. In this case, the decryption key is public, so everyone can decrypt. The encryption key is kept secret; being able to encrypt a message is proof that the sender is who they say they are. The common analogy for this is a wax seal: only the sender knows how to make the wax seal, but anyone can open the message.
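To make the wax-seal analogy concrete, here is a minimal sketch of signing and verifying with a key pair, assuming a Node.js environment and its built-in crypto module. This is illustration only; as explained next, it is not the scheme we chose.

```javascript
// Illustration only: public key signing/verification in Node.js.
// The private key stays with the sender; anyone with the public key can verify.
const crypto = require('crypto');

const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

const message = Buffer.from('store photo.jpg');

// "Sealing" the message: only the private key holder can produce this signature.
const signature = crypto.sign('sha256', message, privateKey);

// "Opening the seal": anyone holding the public key can check it.
console.log(crypto.verify('sha256', message, publicKey, signature)); // true
```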

In order to make things simpler for developers who are integrating Filepicker.io, we decided to use symmetric key cryptography. The process of sharing a secret is easy enough since our developers can log in to our site to see the secret. (Login is another great example of a secure authorization scheme; thankfully we didn’t have to design our own protocol and could just use bcrypt.)

Before we go into specifics, at a high level the way we add our secure protocol into the js library is:

* On their developer portal, a developer can set a flag that requires all requests to be made securely.

* The developer is then assigned a secret key, which they keep server-side.

* The developer can use this secret key to create signed policies that specify who can do what and when, and then pass the signed policy to their javascript. The javascript sends it to filestack.com, which verifies that everything checks out and sends the request on its way, as sketched below.
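As a rough sketch of what that server-side step can look like (assuming Node.js; the field names `expiry` and `call`, the environment variable, and the base64-plus-HMAC encoding here are illustrative rather than the exact wire format):

```javascript
// Minimal server-side sketch of generating a signed policy (Node.js).
// Field names and encoding are illustrative; consult the API docs for the
// authoritative format.
const crypto = require('crypto');

// Kept server-side, never shipped to the browser (example env var name).
const SECRET = process.env.FILEPICKER_SECRET;

function signedPolicy(policy) {
  // Encode the policy (who can do what, and until when) as base64 JSON...
  const encoded = Buffer.from(JSON.stringify(policy)).toString('base64');
  // ...and sign it with the shared secret so it can be verified later.
  const signature = crypto
    .createHmac('sha256', SECRET)
    .update(encoded)
    .digest('hex');
  return { policy: encoded, signature };
}

// Example: allow reads and stores for the next hour.
const credentials = signedPolicy({
  expiry: Math.floor(Date.now() / 1000) + 3600,
  call: ['read', 'store'],
});
// Pass `credentials` down to the javascript running in the browser.
```

The browser never sees the secret, only the encoded policy and its signature, which it forwards along with each request.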

The nice thing about this scheme, specifically the expiry time, is that it gives the developer control over the tradeoff between simplicity and security. A developer can create a policy that allows all actions for all time and use it everywhere, but this exposes him to session-jacking. Alternatively, he can create a policy that is limited to a single action on a single asset for the next 5 seconds, essentially a one-time-use token, but then he will have to generate tokens every time.
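Using the hypothetical signedPolicy helper from the sketch above, the two ends of that spectrum might look like this (the `handle` field for naming a single asset is likewise illustrative):

```javascript
// Convenient but risky: allow every call, far into the future.
const broad = signedPolicy({
  expiry: 4102444800, // far-future timestamp (Jan 1, 2100)
  call: ['read', 'store', 'remove', 'write'],
});

// Safer but fiddly: one action, one asset, valid for ~5 seconds.
const narrow = signedPolicy({
  expiry: Math.floor(Date.now() / 1000) + 5,
  call: ['read'],
  handle: 'SOME_FILE_HANDLE', // hypothetical identifier for a single asset
});
```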

Secure message passing with HMAC

We decided to use HMAC. We’ll first walk through a simpler example before talking about the internals of HMAC:

The user has a message. They combine the message and the secret into a secret message, which they then hash (using md5, sha1, etc.) into a signature.

To verify, the server repeats the same steps and checks whether it gets the same result. If the hash of the combined message and secret matches the signature the user provides, you can be sure that the user had access to the secret.
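A bare-bones version of that naive scheme (for illustration only; not what we ship, and vulnerable as described next) might look like this in Node.js:

```javascript
// Naive "hash the secret plus the message" signing, for illustration only.
const crypto = require('crypto');

const SECRET = 'shared-secret'; // known to both parties

function naiveSign(message) {
  return crypto.createHash('sha256').update(SECRET + message).digest('hex');
}

function naiveVerify(message, signature) {
  // Recompute the hash; a match proves the signer knew the secret.
  // (A real implementation would also compare in constant time.)
  return naiveSign(message) === signature;
}

const sig = naiveSign('store photo.jpg');
console.log(naiveVerify('store photo.jpg', sig));  // true
console.log(naiveVerify('delete photo.jpg', sig)); // false
```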

HMAC is just a bit more complicated than this. The common hashing algorithms, the MD family and the SHA family, are built on the Merkle–Damgård construction, which processes the message block by block from left to right and carries state forward; this makes it possible to append more characters to the end of a naively signed message in what is called a length-extension attack. HMAC therefore runs the hash twice, mixing the key in differently each time, and has no known practical attacks of this kind.
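You don’t need to implement the double hashing yourself; standard libraries already provide HMAC. A minimal sketch with Node’s crypto module:

```javascript
// HMAC signing and verification using a standard library primitive (Node.js).
const crypto = require('crypto');

const SECRET = 'shared-secret';

function hmacSign(message) {
  return crypto.createHmac('sha256', SECRET).update(message).digest('hex');
}

function hmacVerify(message, signature) {
  // timingSafeEqual avoids leaking information through comparison timing.
  const expected = Buffer.from(hmacSign(message), 'hex');
  const given = Buffer.from(signature, 'hex');
  return expected.length === given.length && crypto.timingSafeEqual(expected, given);
}
```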

Security breaks when humans are involved

Humans are often the weakest link in a security protocol. In the recent security breach of Wired writer Mat Honan, the weak link was a human on the customer support line who was trying to be helpful, but was unwittingly helping the attacker. Computer viruses are most commonly installed by a user clicking links in emails or installing unknown software. A study by a US National Laboratory found that 20% of their employees would pick up flash drives dropped outside and plug them in; once plugged in, the drives could install software on the work machine.

Therefore, to protect against this, there are a couple steps to take:

While you can design your own security scheme, leave the specifics to the experts

For our specific use case, we designed a policy-based scheme with expiry, but at its core we used HMAC. After all, it’s best to defer to the people who study this for a living. Anyone who knows a bit about security and a bit about your business should be able to point you in the right direction. Similarly, don’t implement your own hashing or encryption functions.

Make it simple to use and debug

It’s not secure if your customers don’t use security properly. Therefore, you have to make it simple to use and, when they inevitably have problems, simple to debug as well. This is something we are still improving, but as a first step we provide code samples and a debug endpoint where you can test your policies.

Engage your community

Find out what level of security is reasonable. Not everyone has the need for or discipline to enforce military grade security protocols. Sit down with your users and think about what seems reasonable.

Special shout-outs to the graduate students at MIT, YC batchmates, and HN commentators who helped us tighten up our security protocol.

So in summary, when designing a security scheme, especially for javascript, it comes down to having “secrets” and figuring out ways to use those secrets so that both parties can trust one another. Security is a journey, so let us know how we can add to this to better provide a simple and secure way to move files online through Filepicker.io.

Follow the discussion on HN
-Liyan Chang
