For all the hate that Google (rightly) gets for some of their work in other domains, I appreciate that they continue to put major resources behind using AI to try and save lives in medicine and autonomous driving.
Easy to take for granted, but their peer companies are not making this kind of long-term investment.
From what I understand, the model was used to broaden a search that had already been conducted by humans. It's not as if the model devised new knowledge; it's kind of a low-hanging fruit. But the question is: how many of these can be reaped? Hopefully a lot!
("Low-hanging fruit" is perhaps not the right way to put it; Google's models are not exactly dumb technology.)
Remarkably, some claim AI has now discovered a new drug candidate on its own. Reading the preprint (https://www.biorxiv.org/content/10.1101/2025.04.14.648850v2....), it appears the model was targeted at one very specific task, without other models being evaluated on the same task. I know nothing about genetics, and even I can see that this is an important advance. Still, it seems a bit headline-grabbing to claim victory for one model without comparing it against others using the same process.
If someone discovers something, does it really matter that someone else could, in theory, have discovered it as well?
Meanwhile OpenAI going into the porn business
If you can have porn without the human trafficking and exploitation associated with the porn industry, that's a big win too
Their research is pivoting to STDs
Both are equally important, just in another dimension.
This is so awesome. Hoping those in the biology field can comment on the significance.
It is awesome.
What I’ll say is: ideally, they would demonstrate whether this model performs any better than simple linear models at predicting gene expression interactions.
We’ve seen that some of the single-cell “foundation” models aren’t actually the best at in silico perturbation modeling. Simple linear models can outperform them.
So this article makes me wonder: if we took the dataset they’ve acquired and ran very standard single-cell RNA-seq analyses (including pathway analyses), would this published association pop out?
My guess is that yes… it would. You’d just need the right scientist, right computational biologist, and right question.
However, I don’t say this to discredit the work in TFA. We are still in the early days of scSeq foundation models, and I am excited about their potential.
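To make the "simple linear baseline" point concrete, here is a minimal sketch of what such a baseline looks like for in silico perturbation prediction: ridge regression from a one-hot perturbation encoding to per-gene expression shifts. Everything here is synthetic and illustrative (gene counts, replicate counts, noise level are all assumptions), not anything from the article or the preprint.

```python
# Hypothetical linear baseline for in silico perturbation prediction.
# All data is synthetic: each perturbation is assumed to shift gene
# expression by a fixed vector, observed with noise across replicates.
import numpy as np

rng = np.random.default_rng(0)
n_perturbations, n_genes, reps = 50, 200, 5

# One-hot encoding of which gene was perturbed, repeated per replicate.
idx = np.repeat(np.arange(n_perturbations), reps)
X = np.eye(n_perturbations)[idx]                  # (250, 50)

# Synthetic "ground truth" effects plus measurement noise.
true_effects = rng.normal(size=(n_perturbations, n_genes))
Y = true_effects[idx] + 0.1 * rng.normal(size=(len(idx), n_genes))

# Ridge regression in closed form: W = (X^T X + lam*I)^(-1) X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_perturbations), X.T @ Y)
Y_hat = X @ W

# Score the baseline by correlating predicted and observed shifts.
r = np.corrcoef(Y_hat.ravel(), Y.ravel())[0, 1]
print(f"linear baseline correlation: {r:.3f}")
```

The point of the comparison is that a foundation model should clear a baseline like this (ideally on held-out perturbations, which this sketch does not split out) before its predictions are treated as evidence of learned biology.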
Let's go !!!
Hell yeah
I am concerned about this kind of technology being used to circumvent traditional safeguards and international agreements that prevent the development of biological weapons.
Well you might be pleased to know that there are large safety teams working at all frontier model companies worried about the same thing! You could even apply if you have related skills.
I thought OpenAI gave up on safety when Anthropic splintered off, and when they engaged ScaleAI to traumatize people for RLHF?
Or Google when they fired Timnit?
The GPT-5 system card is 59 pages. Pages 5 to 56 address safety in various forms.
https://cdn.openai.com/gpt-5-system-card.pdf
Are these safety teams subject to the oversight of more established international agreements and safeguards?
I mean.. they work within the legal frameworks of very large corporations with nation state engagement. It's not like they're autonomous anonymous DAOs
Hi! I work directly on these teams as a model builder and have talked to my colleagues at the other labs as well.
All our orgs have openings, and you could also consider working for organizations such as the UK AISI team and other independent organizations that are assessing these models. It's a critical field, and there is a need for motivated folks.
That does not answer my question.
Seems like no matter how positive the headline about the technology is, there is invariably someone in the comments pointing out a worst case hypothetical. Is there a name for this phenomenon?
Rational discourse? Not working for a marketing team? Realism?
Not believing everything you read on the internet? Being jaded from constant fluff and lies? Not having Gell-Mann amnesia?
I get your sentiment of "why you gotta bring down this good thing", but the answer to your actual question is battle scars from the constant barrage of hostile lies and whitewashing we are subject to. It's kind of absurd (and mildly irresponsible) to think "THIS time will be the time things only go well and nobody uses the new thing for something I don't want".
Pessimism?
We’ve just had a virus - specifically engineered to be highly infectious to humans - escape a lab (which was operating at a very lax safety level - BSL-2 instead of the required BSL-4), killing millions and shutting down half the globe. So I’m wondering what safeguards and prevention you’re talking about :)
You're trying to deflect the discussion into a polemic tarpit. That's not going to work.
I do not endorse the view that covid was engineered. I also consider it unrelated to what I am concerned about, and I will kindly explain why:
Traditional labs work with the wet stuff, and there are a lot of safeguards (the levels you mentioned didn't come out of thin air). Of course I am in favor of enforcing the existing safeguards to the most ethical levels possible.
However, when I say that I am concerned about AI being used to circumvent international agreements, I am talking about loopholes that could allow progress in the development of bioweapons without the use of wet labs - for example, by carefully weaving around international rules and doing the development in simulation, which can bypass outdated assumptions that didn't foresee this possibility when they were conceived.
This is not new. For example, many people were concerned about research on fusion energy related to compressing fuel pellets, which could be seen as a way of weaving around international treaties on the development of precursor components to more powerful nuclear weapons (better triggers, smaller warheads, all kinds of nasty things).
>For example, by carefully weaving around international rules and doing the development in simulation, which can bypass outdated assumptions that didn't foresee this possibility when they were conceived.
Covid development in Wuhan was exactly such careful weaving - laundered through EcoHealth - around the official rule of "no such dangerous GoF research on US soil". Whether such things are weaved away offshore or into virtual space is just a minor detail of implementation.
Still irrelevant to what I brought up.
Don't spread misinformation. This myth is widely believed only by Americans.
https://en.wikipedia.org/wiki/COVID-19_misinformation#Virus_...
This myth is documented in EcoHealth Alliance's publicly available NIH and DARPA grant documents, among others. Regarding your link: Wikipedia, unfortunately, isn't subject to the law the way those grants are.
Covid is irrelevant to the discussion I opened. You're trying to steer the discussion into a place that will lead us nowhere, because there are too many artificial polemics around it.
The only thing to be said about it that resonates with my concern is that anyone in their right mind wants better international oversight of potential bioweapons development.