A few thoughts on Seer


Seer is a proteomics sample prep company. They IPO’d earlier this year; prior to that they appear to have raised $169.5M. Given that the company was founded in 2017, this must have been a quick exit for their investors. Seer was founded by Omid Farokhzad (CEO) and Philip Ma (SBO). It looks like Philip left at the end of 2020.

Revenue for Q1 2021 was $62,000. Operating expenses were $16.6M. They had $531M in the bank. They say they are working with “limited release customers” (essentially early access) and aiming for initial placements in the “high single digit” range.

Looking over Omid’s patents, it’s clear he has a deep background in nanoparticles as applied to various life science problems, in particular targeted delivery in therapeutics. This suggests to me that Seer’s work grew out of his background in nanoparticle applications.

It feels like Seer mostly targets life science tools applications, at least initially. But one user talks about the potential of the approach for early stage cancer diagnostics.

Like most next-gen proteomics companies, they draw parallels between the genomics and proteomics markets.


I classify Seer as a proteomics sample prep company. They take a protein sample and apply a prep which compresses its dynamic range. This is important because protein abundances cover a range of “10 orders of magnitude”. This huge dynamic range can also be used to justify Nautilus’ need for 10 billion wells.

The assumption, therefore, is that samples follow this power-law-like distribution, where a few proteins are very common, and that if you sample a small number of proteins you’ll likely only ever see a high abundance subset. This generally appears to be the case for human plasma.
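A quick simulation makes the point. This is a minimal sketch with made-up numbers (a hypothetical 1,000-protein proteome spanning ten orders of magnitude, not real plasma data): even with 100,000 observations drawn in proportion to abundance, you only ever see a small, high-abundance subset of the proteome.

```python
import random

random.seed(0)

# Hypothetical proteome: 1,000 proteins whose abundances span
# ~10 orders of magnitude (illustrative, not real plasma data).
n_proteins = 1000
abundances = [10 ** (10 * (1 - i / n_proteins)) for i in range(n_proteins)]
total = sum(abundances)
weights = [a / total for a in abundances]

# Draw 100,000 "observations" (e.g. peptide identifications),
# each protein sampled in proportion to its abundance.
draws = random.choices(range(n_proteins), weights=weights, k=100_000)
seen = set(draws)

# Despite 100k observations, most low-abundance proteins are never seen.
print(f"distinct proteins observed: {len(seen)} / {n_proteins}")
```

The exact count depends on the assumed distribution, but the shape of the result doesn’t: the observed set is dominated by the high-abundance tail, which is why plasma proteomics needs either enormous sample loads or some way of compressing the dynamic range.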

Methods therefore exist to either remove or separate out high abundance proteins using various approaches prior to analysis using mass spectrometry.

Seer aims to provide a better approach to compressing down high abundance proteins. As one user says, “…you could get at that depth but it would be with one sample with a tremendous amount of sample…” and that Seer “for the first time makes a study with 500 to 1000 cases possible”.

In the Seer approach, proteins are exposed to 5 SPIONs (superparamagnetic iron oxide nanoparticles). These are functionalized such that a subset of proteins attaches to each SPION. The image below shows how each nanoparticle is supposed to sample a subset of proteins across the dynamic range:

How SPIONs Work

From their publication/patent it looks like the 5 SPIONs are: SP-003 (silica-coated), SP-006 (amine functionalized?), SP-007 (PDMAPMA), SP-333, and SP-339. There doesn’t seem to be any information on what 333 and 339 are, but the patent refers to Ubiquitin (S-164-001) and Dextran (P-073) functionalized particles.

The idea is that each particle type will bind specifically to some subset of proteins. You’ll then also get secondary binding of other proteins to these initially bound proteins, creating a kind of cloud (corona) around the particle.

What’s missing from the publication and patent (at least for me) is any clear reasoning as to why the particles should bind specifically. In fact, it seems to me that the 3 known particles don’t bind very specifically. The patent shows a plot of protein groups (PGs) against sample mass:

SP-003, SP-006 and SP-007, which all have relatively simple, small-molecule surface functionalization, appear to show relatively low numbers of protein groups. The other particles, functionalized with macromolecules, appear to show much higher PG counts.

My guess is they use a mix of particles functionalized with small molecules and macromolecules. They say that for the small molecules “it is unlikely that a positive or negative charge alone favors higher protein yield”, but it seems likely to me that some relatively basic feature of the protein (like charge), combined with secondary protein-protein binding, would explain most of the small-molecule protein yield.

So the small-molecule particles sample the total proteome, binding somewhat preferentially to a large subset of proteins. Because of this, they’ll largely end up binding a subset of the high abundance proteins, simply because the high abundance proteins are more common.

Secondary interactions then help them sample a few lower abundance proteins. But if, for whatever reason, the proteome were completely different and those high abundance proteins were not present, you’d get completely different results (fortunately, if you’re looking at human samples, I think this is pretty unlikely). One way to think about this, perhaps, is that the high abundance proteins are themselves a form of functionalization which binds lower abundance proteins.
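To see how a particle surface could compress dynamic range at all, here’s a toy saturation model. This is my own assumption for illustration, not Seer’s published model: each protein species binds the particle with a shared half-saturation constant, so high-abundance proteins plateau at a cap while low-abundance proteins bind roughly in proportion to their abundance.

```python
import math

# Toy Langmuir-style saturation model (illustrative assumption,
# not taken from Seer's publication or patent).
K = 1e5    # half-saturation abundance, arbitrary units (assumed)
cap = 1.0  # maximum bound signal per protein species (assumed)

def bound(a):
    """Bound (measured) signal for a protein of input abundance a."""
    return cap * a / (a + K)

inputs = [10 ** e for e in range(0, 11)]  # abundances spanning 10 orders
outputs = [bound(a) for a in inputs]

in_range = math.log10(max(inputs) / min(inputs))
out_range = math.log10(max(outputs) / min(outputs))
print(f"input dynamic range:  {in_range:.1f} orders of magnitude")
print(f"output dynamic range: {out_range:.1f} orders of magnitude")
```

With these (arbitrary) constants, ten orders of magnitude of input abundance compress to roughly five in the bound signal: everything well above K saturates at the cap, everything well below it stays linear. Real corona formation is competitive and far messier, but this is the basic shape of the compression argument.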

The macromolecular SPIONs are likely far more specific. For example, the Ubiquitin functionalized SPION probably pulls out a number of Ubiquitin interacting proteins and related secondary binding (about 5% of genes are Ubiquitin related, so this is likely some decently large subset of the proteome).

Using the Platform

The platform/kits seem to automate the entire mass spec sample prep process, including parts of the process common to Seer’s and other approaches. I guess the idea is to make the platform as turnkey as possible:

The Seer instrument seems like a pretty generic liquid handling box:

Setup time is 30 minutes, with a 7 hour run time (after which you still need to run the mass spec).


Of the “next-gen proteomics” companies, Seer seems the nearest to market. But it’s also different in character: the quality of the data the Seer approach generates isn’t really very different from existing approaches. What they seem to have done is simplify the prep and automate much of the workflow.

I’d be interested to see how the Seer prep would work with a platform like QuantumSi’s. As QuantumSi has relatively low throughput, could the Seer prep help push QuantumSi into higher throughput applications?

I have a few concerns about the basic Seer approach. The first is whether the linearity between samples is as good as with competing approaches; specifically, the reliance on secondary interactions is a concern. But I suspect this is fine in practice, and further pilot studies should help confirm it.

Overall, it seems like an interesting approach. I’d compare it to what 10X does in genomics: a fairly low technical risk tool, here targeted at proteomics applications. However, it will be interesting to see if one of the next-gen protein sequencing/fingerprinting companies produces something that completely disrupts the market.