Documentation

Security

We've been working in and around IT services for a long time, and we've all learned the importance of holding our black hat adversaries in the highest regard. They are, to say the least, highly adept at what they do. And while none of us considers ourselves a security expert, in the strictest sense of the term, we do know a thing or two about how to protect our systems from bad actors.

Mind you, and lest it go unsaid, the best way for you to protect your Smarter AI resources from bad guys is to manage your API keys responsibly. But with that said, we've taken a few steps of our own to help keep a tight lid on things.

Smarter Has a Small Attack Surface

The best way to keep lock pickers at bay is to simply not have doors unless you absolutely, positively must. In the case of Smarter, by 'doors' we mean ports. Our entire infrastructure platform is limited to port 443, plus an IP-restricted port 22 for extremely limited SSH access. That's it. No other ports are open to the public.
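The policy above can be sketched as a simple allowlist check. This is an illustration only, not Smarter's actual firewall configuration — the CIDR range and the decision logic are made up for the example:

```python
# Hypothetical sketch of the ingress policy described above: everything is
# denied except 443, and 22 only from an allowlisted address range.
from ipaddress import ip_address, ip_network

PUBLIC_PORTS = {443}  # open to the world
RESTRICTED_PORTS = {22: [ip_network("203.0.113.0/24")]}  # SSH, illustrative range

def is_permitted(port: int, source_ip: str) -> bool:
    """Return True if a connection to `port` from `source_ip` is allowed."""
    if port in PUBLIC_PORTS:
        return True
    allowed_ranges = RESTRICTED_PORTS.get(port)
    if allowed_ranges is None:
        return False  # default deny: the port simply isn't open
    return any(ip_address(source_ip) in net for net in allowed_ranges)
```

The key property is the default-deny fall-through: a port that appears in neither table is closed, no exceptions.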

All Smarter Services Run Behind a Proxy

All of our services run behind a proxy, which means the only way to access them is through it. The proxy is responsible for routing requests to the appropriate service, and it also handles SSL termination. This means that all traffic to and from our services is encrypted, and, where applicable, we're able to apply specialized middleware as countermeasures to common intrusion strategies.
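To illustrate the routing half of that job, here is a minimal sketch of path-prefix routing of the kind a reverse proxy performs. The service names and prefixes are invented for the example and don't reflect Smarter's real topology:

```python
# Toy reverse-proxy routing table: longest matching path prefix wins.
ROUTES = {
    "/api/": "api-service",
    "/chat/": "chat-service",
    "/": "web-service",  # catch-all fallback
}

def route(path: str) -> str:
    """Pick the upstream service for a request path (longest prefix wins)."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]
    raise LookupError(path)
```

Because every request funnels through this one chokepoint, it's also the natural place to terminate TLS and attach security middleware.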

All Infrastructure Uses Service Accounts

All of our infrastructure services run under service accounts. The services themselves have no direct access to the underlying infrastructure; instead, they must authenticate with their service account on each request. This is a best practice for securing infrastructure, as it limits the potential damage that can be done by a compromised service.
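One common way to implement per-request authentication is an HMAC-signed request scheme. The sketch below is an assumption about how such a scheme might look, not a description of Smarter's actual mechanism — the service name and secret are placeholders:

```python
# Illustrative per-request service-account authentication using HMAC.
import hashlib
import hmac

SERVICE_SECRETS = {"billing-svc": b"not-a-real-secret"}  # issued per service

def sign_request(service: str, body: bytes) -> str:
    """Signature a service attaches to each outgoing request."""
    return hmac.new(SERVICE_SECRETS[service], body, hashlib.sha256).hexdigest()

def verify_request(service: str, body: bytes, signature: str) -> bool:
    """Infrastructure side: re-derive the signature, compare in constant time."""
    expected = sign_request(service, body)
    return hmac.compare_digest(expected, signature)
```

Note the use of `hmac.compare_digest` rather than `==`, which avoids leaking information through comparison timing.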

We Enforce a Credential Rotation Policy

We enforce a credential rotation policy for all of our services, meaning every service is required to rotate its credentials on a fixed schedule. This is a best practice for securing infrastructure, as it limits the potential damage that can be done by a compromised credential. We also enforce a policy of least privilege: services are given only the permissions they need to do their job, and nothing more.

Few Humans Have Access

Few humans have access to our infrastructure. This is a best practice for securing infrastructure, as it limits the potential damage that can be done by a compromised account. Here too we enforce a policy of least privilege: people are given only the permissions they need to do their job, and nothing more. As odd as it sounds, the lead engineer of the Smarter platform doesn't even have root access to the production environment. In fact, no human does.

Common Sense Countermeasures

We've implemented a number of common sense countermeasures to help protect our infrastructure from bad actors. We monitor our infrastructure for unusual activity, and we have a response plan in place in case of a security incident. We take security very seriously, and we're constantly working to improve our security posture. Our countermeasures include:

  • Rate limiting
  • IP blacklisting
  • Monitoring for unusual activity
  • CORS Policy
  • CSRF Protection
  • SQL Injection Protection
  • XSS Protection
  • Clickjacking Protection
  • Content Security Policy
  • HTTP Strict Transport Security
  • Referrer Policy
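Several of the protections in the list map directly onto stock Django settings and middleware. The fragment below is an illustrative settings sketch, not Smarter's production configuration; note that CSP and CORS are not built into Django itself and typically come from third-party packages such as django-csp and django-cors-headers:

```python
# Sketch of Django settings behind several of the protections listed above.
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",            # HSTS, nosniff, referrer policy
    "django.middleware.csrf.CsrfViewMiddleware",                # CSRF protection
    "django.middleware.clickjacking.XFrameOptionsMiddleware",   # clickjacking protection
]

SECURE_HSTS_SECONDS = 31536000           # HTTP Strict Transport Security, 1 year
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_SSL_REDIRECT = True               # force HTTPS
SECURE_REFERRER_POLICY = "same-origin"   # Referrer-Policy header
SECURE_CONTENT_TYPE_NOSNIFF = True       # X-Content-Type-Options header
X_FRAME_OPTIONS = "DENY"                 # X-Frame-Options header
```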

While we'd love to take credit for some or all of these, the truth is that most of them are built into the Django framework we use to build our services. But we're happy to take credit for having done the hard work of building everything else!

Our Persistence Services Are Behind VERY Thick Service Layers

The nature of Django's ORM is such that it's overwhelmingly difficult to issue raw SQL through any pathway, let alone through the tiny tunnel that SQL injection affords. This is good, because it means a SQL injection attack against Smarter is unlikely to succeed in the first place. But we're not taking any chances, and we've layered on a number of other common sense countermeasures. For example, we use separate logical databases for different services, each with its own credentials and permissions.
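In Django, per-service database isolation looks roughly like the settings fragment below. The database names, users, and host are invented for the example; in practice the passwords would be injected from a secrets manager rather than written in settings:

```python
# Illustrative per-service logical databases, each with its own credentials.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "smarter_web",
        "USER": "web_svc",      # grants limited to web-facing tables
        "PASSWORD": "injected-from-secrets-manager",
        "HOST": "db.internal",
        "PORT": "5432",
    },
    "chat": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "smarter_chat",
        "USER": "chat_svc",     # separate credentials and grants entirely
        "PASSWORD": "injected-from-secrets-manager",
        "HOST": "db.internal",
        "PORT": "5432",
    },
}
```

Even if one service's credentials leak, they open no door into the other service's data.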

We're Reasonably Well Protected From DDoS Attacks

Not so much because of anything we ourselves did, but simply because of our infrastructure choices. Port 443 is fronted by a large cloud-based load balancer, which is designed to absorb DDoS attacks, and raw traffic is routed through a cloud-based WAF that does the same. We also have a number of other countermeasures in place, such as rate limiting and IP blacklisting. Lastly, our Kubernetes ingress controller is configured both to a.) scale itself to meet demand, and b.) automatically block IP addresses that are sending too many requests.
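The automatic blocking behavior can be sketched as a fixed-window rate limiter that escalates repeat offenders to a block list. This is a toy model of what an ingress controller does, with made-up thresholds, not Smarter's actual configuration:

```python
# Toy fixed-window rate limiter with automatic IP blocking.
from collections import defaultdict

LIMIT = 100          # requests allowed per window (assumed)
BLOCK_THRESHOLD = 3  # windows over the limit before a permanent block (assumed)

class RateLimiter:
    def __init__(self) -> None:
        self.counts: dict[str, int] = defaultdict(int)
        self.strikes: dict[str, int] = defaultdict(int)
        self.blocked: set[str] = set()

    def allow(self, ip: str) -> bool:
        """Admit or reject one request from `ip` in the current window."""
        if ip in self.blocked:
            return False
        self.counts[ip] += 1
        return self.counts[ip] <= LIMIT

    def new_window(self) -> None:
        """End the window: record strikes for offenders, then reset counts."""
        for ip, n in self.counts.items():
            if n > LIMIT:
                self.strikes[ip] += 1
                if self.strikes[ip] >= BLOCK_THRESHOLD:
                    self.blocked.add(ip)
        self.counts.clear()
```

A real ingress controller tracks windows by wall-clock time and expires blocks eventually; the escalation logic is the same idea.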

Again, we're definitely not trying to blow our own horn. All of this is made possible by our technology vendor partners more so than anything we ourselves did.

We're 'Pretty Close' To Zero Trust

We still have room to improve here, but from the outset we've strived to implement a zero trust security model in our core infrastructure. We've implemented dedicated VPN subnets and security groups for each service, along with a number of other common sense countermeasures. The point of all this is, as much as possible, to make every intrusion pathway a dead end for the would-be intruder.