Brain Cells From Stem Cells Train Machine To ID Neurotoxins


Image credit: PNAS

For obvious reasons, we need to know if chemicals we encounter are neurotoxic to us at any time of life. Just as obviously, testing that neurotoxicity through deliberate exposure of human brains would be the nadir of unethical scientific experimentation.

One way to solve the problem is to grow a reasonable facsimile of a brain in a dish. Of course, “reasonable” and “facsimile” can carry their own set of ethical questions, especially if you’re one who subscribes to the conceptualization of a human being as a “pack of neurons” acted on by and responding to its environment. No one's developed anything like that yet.

But Michael Schwartz, Ph.D., a scientist in the department of biomedical engineering at the University of Wisconsin–Madison, and his co-authors have developed a three-dimensional brain model in a dish, one that’s distinguished from other such models by the fact that it is indeed more than a pack of neurons. It's not anywhere near being a real brain, but it can engage in some of the basic interactions of the cells that form the brain.

In fact, it incorporates several cell types that work together to support and protect the neurons, including microglia, which play a role in immunity, and cells that behave like those that make blood vessels in the early brain. The entire construct develops on a watery gel scaffolding containing molecules that, conductor-like, help to orchestrate the different cell types into developing and interacting as they might in an actual brain.

Once the cell types were in place and networking, the researchers exposed them to different chemicals to see how the cells responded at the gene level, determining which genes the cells used more ... or less. These signatures of gene response to toxic or non-toxic chemicals could then be compared to the cells' response to chemicals of unknown toxicity to see if they show non-toxic or toxic patterns.

Giving whole new meaning to "brain training," Schwartz and colleagues trained a machine on a set of 60 known chemicals to recognize the toxic and nontoxic gene response patterns. They found that their complete screening process could correctly classify 9 out of 10 blinded chemicals based on the signatures the machine learned.
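
For readers who want a concrete picture of that train-then-classify step, here is a minimal sketch in Python with scikit-learn of a workflow of this general kind. The synthetic data, the gene count, and the choice of logistic regression are placeholders for illustration, not the team's actual pipeline.

```python
# Hypothetical sketch: train a classifier on gene-expression "signatures" from
# constructs exposed to chemicals of known toxicity, then label a new chemical.
# All data here are synthetic placeholders; the study's real inputs are RNA-seq
# profiles, and its actual model may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_train_chemicals = 60   # size of the training set described in the article
n_genes = 19_000         # roughly the number of genes measured by RNA-seq

# One row per chemical exposure, one column per gene (placeholder values).
X_train = rng.normal(size=(n_train_chemicals, n_genes))
y_train = rng.integers(0, 2, size=n_train_chemicals)  # 1 = toxic, 0 = non-toxic

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Classify a chemical of unknown toxicity from its gene-response profile.
x_unknown = rng.normal(size=(1, n_genes))
print("predicted label:", int(model.predict(x_unknown)[0]))
```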

Schwartz and co-lead author Zhonggang Hou of the Morgridge Institute (and now at Harvard University) and their colleagues have now opened the door—and the human brain—to a rapid, fairly accurate way of screening for the neurotoxicity of chemicals, a significant step. A screening tool like this can serve as a connecting step and filter between studies on the same cell type in a dish and studies on whole animals, like rodents.

In addition, the construct could have implications for understanding how brains are built and for drug research and development. Because the work, published today in the Proceedings of the National Academy of Sciences, looks like a Pretty Big deal, I reached out to Schwartz by email with a few questions about it.

EJW: The system you've developed relies on four elements to generate a screen for chemicals that are potentially harmful to the developing human brain. Would you be able to give a brief explanation of the contribution of each element to your system?

MS: Human pluripotent stem cells were used to derive “precursor” cell types representing distinct components of the developing human brain that have been implicated in developmental neurotoxicity, including (1) neural progenitor cells (which differentiate into neurons and glial cells), (2) vascular cells, and (3) microglia (the specialized immune cell of the brain). Previous studies have demonstrated that human pluripotent stem cells will spontaneously self-assemble into “organoids” with features that resemble the developing brain, but ours is the first to include organized vascular networks and microglia.

The precursor cells were cultured on synthetic hydrogels (which are polymers with high water content) that incorporated peptides to promote cellular self-assembly into 3D neural tissues. Such strategies are common in tissue engineering, but are only beginning to be applied to organoid culture models.

RNA-sequencing was used to determine gene expression within the neural constructs, which includes information about the cellular interactions represented by more than 19,000 genes.

Machine learning uses a computational algorithm that identifies a “signature” for toxicity by comparing gene expression for neural tissue constructs exposed to a training set of known toxic and non-toxic chemicals. The machine learning algorithm can then predict toxicity by determining if a neural tissue construct that is exposed to an unknown chemical has a toxic or non-toxic gene expression signature.
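
As an illustrative sketch (not the study's actual code or method) of what such a "signature" can look like in practice: in a linear classifier, each gene receives a weight, and the most heavily weighted genes form a crude signature. The gene names and data below are invented for illustration.

```python
# Hypothetical illustration of a toxicity "signature" as the top-weighted genes
# of a linear classifier. Data and gene identifiers are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
genes = [f"gene_{i}" for i in range(19_000)]   # stand-in gene identifiers
X = rng.normal(size=(60, len(genes)))          # 60 training chemicals
y = rng.integers(0, 2, size=60)                # known toxic / non-toxic labels

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Rank genes by the magnitude of their learned weight: a crude signature.
weights = clf.coef_.ravel()
top10 = np.argsort(np.abs(weights))[::-1][:10]
for i in top10:
    print(genes[i], round(float(weights[i]), 4))
```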

EJW: What drove your selection of chemicals for testing? Which of the chemicals you tested evaded identification as harmful when they are known to be? Why do you think those escaped detection?

MS: The toxic chemicals were chosen based on previous literature, including several from a list of neurotoxins established by the Environmental Protection Agency. Non-toxic controls included common food additives and other chemicals that are not suspected to be neurotoxic. The machine learning model developed using the training set correctly identified 9 out of 10 blinded chemicals, with the one incorrect prediction being a “false positive.” In other words, in that case the machine learning model used to make blinded predictions incorrectly predicted that a non-toxic control chemical was toxic.

Some toxic chemicals in the training set were incorrectly classified as non-toxic, which may be due to several factors. One potential source for error is the small size of the training set (60 chemicals) compared to the number of data points being analyzed (>19,000 genes), which can be problematic for machine learning algorithms. We expect that the machine learning algorithm will be more robust as additional training chemicals are added to account for toxicity mechanisms that are not represented by the 34 chemicals in the current training set.
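
As a rough illustration of why so few training chemicals relative to so many genes is statistically delicate, the sketch below applies two standard safeguards, strong regularization and leave-one-out cross-validation, to synthetic data of the same shape. It is illustrative only, not the authors' analysis.

```python
# Hypothetical sketch of the "few samples, many features" problem: 60 chemicals
# versus ~19,000 genes. Regularization and leave-one-out cross-validation are
# common safeguards against overfitting in this regime. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 19_000))   # 60 chemical exposures x ~19,000 genes
y = rng.integers(0, 2, size=60)     # toxic / non-toxic labels

# A small C means strong L2 regularization, limiting each gene's contribution.
clf = LogisticRegression(C=0.01, max_iter=1000)

# Leave-one-out: hold out each chemical once and predict it from the rest.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```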

Another possible source of error is our choice of chemicals and the dosing concentrations used to generate the predictive model. If we dose a chemical at a concentration that has an unintended effect, then what looks like an error may in fact be a proper assessment by the machine learning algorithm of the dose we actually used.

For example, oleic acid was a non-toxic control chemical that was predicted (by the machine) to be toxic, making it a false positive. The concentration for oleic acid was lower than reported human blood serum concentrations. However, oleic acid may not pass the blood–brain barrier, and thus (real-life) concentrations in the brain may be much lower. Therefore, the dose (we used) may have actually been toxic, and we simply misclassified oleic acid in our training set, in which case the machine learning algorithm may have made a proper assessment. We describe these limitations in some detail in the paper.

EJW: How does your system compare to any other in vitro screening methods for neurotoxicity that are already available (e.g., in terms of sophistication, rapidity, high throughput, costs)?

MS: I’m not sure I feel comfortable trying to compare our approach to other platforms, especially since there are so many different approaches out there and each has different intended applications. But we are excited that we were able to achieve a high level of sophistication without sacrificing sample uniformity. The 280 neural tissue constructs used for toxicity screening were formed by hand using standard culture techniques, and we are currently in the process of automating our procedure to expand the throughput even further. Though RNA-sequencing is expensive compared to some analysis techniques, costs have dramatically decreased over the years. We recently published an in-house protocol to further reduce RNA-sequencing costs.

EJW: What are some potential applications of this system? 

MS: In addition to toxicity screening applications, we are hopeful that our protocols will expand the potential for applying neural organoid technologies as a discovery tool by allowing systematic and quantitative assessment of factors that influence human brain development. The model may be useful for drug discovery by enabling more sophisticated approaches for identifying mechanisms that can be targeted for therapeutic intervention, and should provide a valuable tool for investigating developmental mechanisms specific to human physiology.

EJW: I'm assuming that, as with all in vitro screens, even one this sophisticated, any results from this tool would be preliminary, a starting point indicating the need for further testing, given the obvious caveats that apply to all in vitro work (i.e., no womb, no organism, no detox systems, etc.). It's unlikely, yes, that any developing brain would be directly bathed in potentially harmful chemicals or even exposed to a single compound at a time?

MS: We use oleic acid as an example for why dosing the neural tissue constructs directly may lead to errors in predicting toxicity. Briefly, blood serum concentrations were used to determine dosing for oleic acid, but animal studies suggest that oleic acid does not pass the blood–brain barrier. The blood–brain barrier is a specialized property of blood vessels in the central nervous system that protects sensitive nervous tissue by preventing many molecules in blood from entering the brain. Thus, dosing directly onto the neural tissue may lead to unexpected toxicities, since brain concentrations may be much lower than blood serum concentrations. Though we were able to induce vascular network formation within our model neural tissue, we do not have the capacity to deliver molecules through these networks.
