Key Takeaways from this newsletter:
The automation of laboratory instruments and workflows is accelerating. In March of this year, Covestro announced plans to launch an automated and nearly autonomous laboratory dedicated to developing binders and crosslinkers for coatings and adhesives.(1) This cutting-edge facility will operate 24/7, conducting tens of thousands of tests annually. The structured data generated will feed into machine learning algorithms, optimizing formulations and guiding experiment selection when formulation objectives remain unmet.
So, how is Covestro accomplishing this? While details remain unclear, one can speculate.
Chemspeed, a company specializing in custom automation platforms for R&D and quality control labs, features Covestro’s announcement on its website, suggesting that it provided the automation hardware.(2) Additionally, Uncountable, a platform for data collection, management, and analysis, including AI-driven model generation, counts Covestro among its customers.(3) Given these connections, it is reasonable to assume that Covestro’s automated laboratory is leveraging Chemspeed and Uncountable technologies, though this remains speculative.
Ultimately, integrating automation with a laboratory informatics package that centralizes data while incorporating visualization and modeling tools is the optimal approach for ensuring FAIR data generation.
What Is FAIR Data and Why Is It Important?
FAIR data adheres to four key principles: Findability, Accessibility, Interoperability, and Reusability.(4)
First outlined in a 2016 Scientific Data paper(5), FAIR data principles emerged in response to the growing volume of laboratory data requiring computational analysis. Data formatted according to FAIR principles is readily available for efficient machine processing.
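To make "readily available for efficient machine processing" concrete, here is a minimal sketch of what a FAIR-style record for a single formulation test might look like. The field names and values are illustrative assumptions, not drawn from any specific standard or from Covestro's system.

```python
import json

# Illustrative FAIR-style record for one formulation test.
record = {
    "id": "doi:10.9999/example-formulation-0042",            # persistent identifier (Findable)
    "access_url": "https://data.example.org/formulation-0042",  # retrievable by protocol (Accessible)
    "units": {"viscosity": "mPa.s", "cure_time": "s"},        # shared vocabulary (Interoperable)
    "license": "CC-BY-4.0",                                   # clear reuse terms (Reusable)
    "provenance": {"instrument": "rheometer-07", "protocol_version": "1.3"},
    "measurements": {"viscosity": 412.5, "cure_time": 180.0},
}

# Because the record is structured and self-describing, software can
# validate and consume it without a human interpreting a lab notebook.
required = {"id", "access_url", "units", "license", "provenance", "measurements"}
assert required <= record.keys()
print(json.dumps(record, indent=2))
```

The point is not the particular schema but that every element a machine needs, identifier, units, license, and provenance, travels with the data itself.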
Automation and FAIR Data: A Synergistic Relationship
Laboratory automation long predates the FAIR data movement. Traditionally, automation aimed to alleviate labor-intensive and repetitive tasks, reducing costs while improving efficiency, speed, and consistency.
However, as laboratory informatics systems evolve to centralize data and generate predictive models, FAIR data becomes a necessity. Faulty data undermines algorithmic reliability, resulting in inaccurate models. While laboratory informatics systems address Findability, Accessibility, and Interoperability, ensuring Reusability remains a scientist-driven challenge. Variability in technique among researchers can limit data consistency, making automation essential to guaranteeing that formulation and testing procedures remain uniform. In essence, automation upholds the "R" in FAIR.
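One way automation enforces that uniformity is by encoding the test procedure itself as versioned code, so every run uses identical parameters no matter which scientist triggers it. The sketch below is a hypothetical illustration; the class and parameter names are invented, not any vendor's API.

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch: a test procedure captured as a frozen, versioned object.
@dataclass(frozen=True)
class CureTestProtocol:
    version: str = "2.1"
    temperature_c: float = 80.0
    dwell_time_s: int = 600
    sample_volume_ml: float = 5.0

def run_test(sample_id: str, protocol: CureTestProtocol) -> dict:
    # An automated platform would drive the instrument here; for the sketch,
    # we simply return the result envelope with the full protocol attached,
    # so every measurement carries its exact procedure (supporting reuse).
    return {
        "sample_id": sample_id,
        "protocol": asdict(protocol),
    }

# Two runs by different operators are guaranteed to share one procedure.
a = run_test("S-001", CureTestProtocol())
b = run_test("S-002", CureTestProtocol())
assert a["protocol"] == b["protocol"]
```

Storing the protocol version alongside each result also lets downstream models exclude or re-weight data generated under superseded procedures.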
Ensuring FAIR Data with Automation
A state-of-the-art laboratory that combines automation with an informatics package supporting centralized data, visualization, and modeling tools is the best route to generating FAIR data. FAIR data, in turn, optimizes visualization and enhances machine learning training datasets, leading to more robust computational models.
#FAIRdata #FAIR #automation #AI #artificialintelligence #machinelearning
(1) Covestro Launches Automated Lab for Coatings & Adhesives
(3) Customer Spotlight: Covestro Uses Uncountable for Optical Fiber Coating Product Development
(5) The FAIR Guiding Principles for scientific data management and stewardship
(photo credit: thisisengineering, pexels.com)
Contact us to explore how our consulting services can enhance your business processes and drive growth.
Copyright © 2025 Polaris Chemical Consulting LLC. All rights reserved.