Our Engineering Stack

Mar 6, 2015 | Seed Engineering

This is the first in a series of posts where we’ll share details about our technology and our software engineering practices.

At Seed, one of the advantages we have over traditional banks is that we’ve built our technology stack from the ground up, with modern practices that are rarely used within the banking industry. Our goal is to provide a banking experience that will always feel contemporary, regardless of any advancement in technology or change in the global financial system. To do this, we started with our team.

Our Culture

The most important part of an engineering stack is the people, and we strive to create a work environment that enables engineers to be happy and productive. The first way we motivate our engineers is through our mission itself — to build an iconic, global, customer-centric bank as a platform. We want to work with people who enjoy solving hard problems while helping people, because that’s what we’re all about. Beyond our mission, we’re creating a culture that enables our engineers to work on their own terms, wherever they want to be, at whatever time, and with whatever tools they’re most comfortable with. For now, our team is distributed across San Francisco and Portland, but we’ll also support remote workers.

As we shared in a previous post, we believe strongly in diversity, and we’re building a culture from the start that we won’t have to apologize for later. This means that we’ll go out of our way to provide opportunities to people from marginalized communities, that we’ll ensure that our team is treating each other with love and respect, and that we’ll invest in doing the work needed to maintain our culture over time. We believe that a happy, diverse group of people working together can accomplish truly amazing things, so our investment in our culture is not just about doing what’s right for our people, it’s also what’s best for our business.

Our team members will introduce themselves in future posts, but in the meantime let’s get into some technical details.


Our Architecture

We started Seed by building an open API. Our API is supported by a small set of backend services that collectively provide the ability to open and manage accounts, authenticate users, move money, and so on. We’ve taken an “SOA light” approach to architecting our backend around a small set of services, seeking to avoid the challenges created by a monolithic code base while steering clear of the cost and complexity we’ve seen associated with overly dogmatic SOA environments. A number of goals motivate this approach, which I’ll detail below.

Keep It Simple

First of all, we want to be able to maintain an accurate mental model of our infrastructure without relying on docs or diagrams whenever possible. We certainly believe in maintaining documentation, but we’re pragmatic enough to know that it will always be out of date and incomplete. Whenever we can no longer understand our systems without consulting references, it’s a good time to take a step back and figure out how we can simplify. Complexity creeps in, and sometimes it’s unavoidable, but we take a deliberate approach to keeping it at bay.

Do What’s Best for the Customer

Secondly, we want our systems to operate in the best interests of our members, not according to some abstract engineering principle. We’ve all been frustrated by downtime with our providers, but few situations are more acutely anxiety-inducing than being unable to access or understand one’s finances. This is doubly true for a business, where a banking failure can have a material impact on the bottom line.

In an effort to embrace this reality, our systems are designed to maintain enough separation between independent business processes such that cascading failures can be avoided whenever possible. We know that everything will break eventually, so rather than trying to over-engineer against failure, we plan for it and build the tools needed to respond gracefully whenever things go wrong.

Stay Productive

Lastly, we think that a hybrid, lean approach can reduce the amount of overhead imposed on our engineers when building and maintaining our systems. While a monolithic codebase can enable rapid development early on, it becomes increasingly difficult to support as the team grows, code-level complexity creeps in, and refactoring is required. On the other hand, while a dogmatic SOA environment can (in theory) provide total separation of concerns and enable large teams to operate independently in parallel, in practice systems become entangled, complex dependencies arise, and overhead goes up dramatically as the number of independent services grows.

By allowing our team to use their best judgment when designing robust, customer-centric systems — rather than mandating a strict methodology — we feel that everybody wins. Our engineers are happy and productive and our members are well served by our technology.

Now that we’ve shared some of our goals for our technical architecture, let’s get into more details about our stack.



Go

We use Go as our primary backend language. We prefer it to languages like Java because it’s fast to write, fast to run, and easy to understand. For the most part, you can tell what’s going to run and what types you’re dealing with just by looking at a line of code. In addition, the Go ecosystem is strong and growing quickly, which means there are usually packages available to do whatever we need, friendly people in the community to get help from, and engineers eager to use the language. There is no perfect language, but we think Go is pretty great.
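The explicitness is easy to see in a small example. In the hypothetical snippet below (the `Account` type and `Withdraw` function are invented for illustration), every type and every failure path is visible on the page — no hidden exceptions, no implicit conversions:

```go
package main

import (
	"errors"
	"fmt"
)

// Account is a hypothetical type used only for illustration.
type Account struct {
	ID      string
	Balance int64 // amount in cents; integers avoid float rounding issues
}

// Withdraw returns the new balance or an explicit error. The caller can
// see both possible outcomes in the signature itself.
func Withdraw(a Account, amount int64) (int64, error) {
	if amount <= 0 {
		return a.Balance, errors.New("amount must be positive")
	}
	if amount > a.Balance {
		return a.Balance, errors.New("insufficient funds")
	}
	return a.Balance - amount, nil
}

func main() {
	acct := Account{ID: "acct_1", Balance: 10_000}
	bal, err := Withdraw(acct, 2_500)
	if err != nil {
		panic(err)
	}
	fmt.Println(bal)
}
```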


Python

We use Python as our primary scripting language. Compared to Ruby, we find it easier to write, read, maintain, and learn. We believe this is a result of intentional choices made by its designers. We also prefer its comparatively lightweight tooling and deployment features. Python fits with our minimalist approach to everything we do: in general, there is one right way to do things, instead of many ways to do things wrong. Many folks love Ruby, and that’s OK. We’re not trying to pick any fights, but don’t even talk to us about Perl. ☺

SageMath and R

We use SageMath and R for statistical and mathematical computing and graphics.


Other Languages

While we strive for simplicity, we’ll sometimes use other languages when we think they’re better suited to the work at hand. For example, Java can be easier to use when integrating with third-party vendors and legacy banking technology (think SOAP/XML, or EBCDIC, and be glad you don’t have to use ’em). There may be times when a language like Rust, Erlang, Clojure, or Haskell is the best fit, but we strive to avoid using multiple languages just for fun. That said, we’re all polyglot programmers, and we like it that way.


Amazon Web Services

We use AWS as our primary infrastructure provider. AWS gives us the power and flexibility to do anything we need to do, including building secure networks, fault-tolerant systems, and powerful yet manageable data and analytics pipelines. We use built-in AWS services whenever possible, rather than re-inventing the wheel. Some details are included below.


VPC

VPC gives us the ability to build a software-defined network that meets our strict security requirements, while also enabling us to build private network connectivity to our banking providers. It can be complex to work with at times, but it makes up for its complexity with its powerful features.


EC2

There’s not much to say here other than that EC2 provides us with all of the options we need in terms of performance, operating systems, and deployment patterns.


S3

We store all of our static content in S3. It’s fast, has good security features, and is infinitely scalable.


RDS

Relational databases were basically invented for our use case, and Postgres on RDS is our primary backing store. By offloading the complexity of maintaining a scalable, highly available database to AWS, we get almost all of the benefits associated with NoSQL databases — but with the convenience of ACID. We prefer Postgres to MySQL for a variety of reasons, but chiefly because we think the Postgres development ethos is better, the community is stronger, and the licensing model is more open. Also, Oracle sucks.


CloudFront

We use the CloudFront CDN for the time being, but we’re considering switching to Fastly at some point. For now, CF is good enough, if not great.


CodeDeploy

We like to use tools that are tightly coupled within the AWS stack. So rather than writing scripts or using configuration management tools for deployment, we’re trying out CodeDeploy. It’s new, and we’ve run into some bugs, but overall we’re pretty happy with it.


CloudWatch

The state of the art in monitoring is abysmal. Nagios is a pox on humankind. Sensu is not much better. Both require big investments in fragile configuration management that usually ends up being a full-time job for one or more engineers. No thank you. So, for now, we’re using CloudWatch as much as possible. It’s not perfect, but when combined with our approaches to designing for failure, building self-healing systems, and integrating code-level health checking into our services, it gets the job done.



Slack

Slack is awesome. Our lil slackbot groot (written in Go) is pretty cool too.


GitHub

We host our code on GitHub because what else would we do?


CircleCI

We love using CircleCI to support our continuous integration pipeline rather than maintaining our own Jenkins setup.


PagerDuty

Don’t hate the pager, hate the game.


Raygun

We’re looking at Raygun to help us identify service-level bug/crash issues.


Librato

We’re using Librato to easily visualize data coming out of CloudWatch, health checks, and so on.


Loggly

We have to keep a lot of logs, and maintaining our own setup would be a pain. Loggly makes it easy.


Azure

Some of our banking technology providers use Azure, so we may too in some cases, but for now we’re all AWS.

Open Source

We plan to open source everything that we possibly can, including some of our banking-specific software. Whenever we create services, libraries, or tools that we think will be useful to others, we’ll share them. This approach is motivated by our core values of community, transparency, and excellence.

We’re standing on the shoulders of giants, and we’re moved by the truly amazing amount of work that the FOSS community has done. By being participants in the software ecosystem rather than parasites, we hope to contribute to the community and help those who come after us get things done more easily. You’ll be able to find our open source contributions on our GitHub page.


Security

We’re unable to share many details about our specific approach to maintaining security and data privacy, but our philosophy is to go well beyond industry norms in everything that we do. We bake security into everything by default. We build our services with the expectation that they will be attacked constantly, both internally and externally, and we plan ahead so that we have mitigations ready when problems arise.

Multi-Factor Authentication

We support MFA in our API and clients. In some cases, multiple levels of authentication from multiple individuals will be required to authorize certain payment flows or enable certain functionality. Even when these requirements are imposed, we’ll use integrated approaches across mobile, web, and offline to make things as easy as possible for our members.
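The “multiple individuals” requirement is a quorum rule: a high-risk action only executes once enough distinct authorized users have signed off. A toy sketch of the idea — the `Payment` type, field names, and threshold are invented for illustration:

```go
package main

import "fmt"

// Payment models a high-risk action that needs a quorum of approvals.
type Payment struct {
	ID        string
	AmountUSD int64
	Required  int             // distinct approvals needed before release
	approvals map[string]bool // set of user IDs that have approved
}

// Approve records one user's sign-off; repeat approvals by the same
// user count only once, so one person can't satisfy the quorum alone.
func (p *Payment) Approve(userID string) {
	if p.approvals == nil {
		p.approvals = map[string]bool{}
	}
	p.approvals[userID] = true
}

// Authorized reports whether the quorum has been reached.
func (p *Payment) Authorized() bool {
	return len(p.approvals) >= p.Required
}

func main() {
	p := &Payment{ID: "pay_1", AmountUSD: 50_000, Required: 2}
	p.Approve("alice")
	fmt.Println(p.Authorized()) // false: only one of two approvals
	p.Approve("bob")
	fmt.Println(p.Authorized()) // true: quorum reached
}
```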

Role-Based Access Control

RBAC provides for better security while also enabling a better customer experience. By supporting multiple roles and logins that can be used to manage a single account with us, we’ll ensure that the most sensitive or high-risk tasks are tightly controlled, while enabling ease of use for the community of folks who support a business. Whether you need to provide limited read-only access or the keys to the castle, you’ll have that ability.
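At its core, RBAC is just a mapping from roles to permitted actions, checked at each request. A toy sketch along the lines described above — the role and action names are invented for illustration:

```go
package main

import "fmt"

type Role string

const (
	Owner      Role = "owner"
	Bookkeeper Role = "bookkeeper"
	ReadOnly   Role = "read-only"
)

// permissions maps each role to the actions it may perform; the most
// sensitive actions ("keys to the castle") are limited to the owner.
var permissions = map[Role]map[string]bool{
	Owner:      {"view": true, "pay": true, "manage_users": true},
	Bookkeeper: {"view": true, "pay": true},
	ReadOnly:   {"view": true},
}

// Can reports whether a role is allowed to perform an action; unknown
// roles and actions safely default to false.
func Can(r Role, action string) bool {
	return permissions[r][action]
}

func main() {
	fmt.Println(Can(ReadOnly, "view"), Can(ReadOnly, "pay")) // true false
}
```

A real system would attach roles to logins and enforce the check in API middleware, but the deny-by-default lookup is the essential shape.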


NaCl

We’re using NaCl as our crypto framework. Cryptography is hard to implement correctly, but the algorithms and tools used in NaCl allow us to get on with our job of building great software without worrying about low-level crypto details.


KMS

We use KMS for key management as part of our approach to securing our infrastructure, services, and data. Being able to use an HSM in the cloud is a huge convenience for us, and this is one of the features that we feel really differentiates AWS from its competition.


You can look forward to future posts from our engineering team, but if you’d like to learn more about us now, check out our previous posts on our mission and our philosophy.

If you’d like to work with us, check out our job listings here.