DARWIN Series: Domain Specific Large Language Models for Natural Science

by Tong Xie, et al.

Emerging tools bring fresh approaches to work, and natural science is no exception: traditionally manual, serial, and labour-intensive workflows are being augmented by automated, parallel, and iterative processes driven by AI-based experimental automation. To accelerate and enrich the automation of the discovery process, we present DARWIN, a series of LLMs tailored for natural science, principally physics, chemistry, and materials science. The series builds on an open-source LLM and incorporates structured and unstructured scientific knowledge from public datasets and the literature. We fine-tuned the models on over 60,000 instruction data points, emphasizing factual correctness. During fine-tuning, we introduce the Scientific Instruction Generation (SIG) model, which automates instruction generation from scientific texts; this removes the need for manual extraction or domain-specific knowledge graphs and efficiently injects scientific knowledge into the model. We also explore multi-task training strategies, revealing interconnections between scientific tasks. The DARWIN series not only achieves state-of-the-art results on various scientific tasks but also reduces reliance on closed-source AI models. Our work showcases the capability of LLMs in the scientific domain, with the overarching goal of fostering prosperity within the broader AI-for-science community.
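To make the SIG idea concrete, the sketch below shows the *shape* of the data such a pipeline produces: a literature passage is wrapped into an (instruction, input, output) record of the kind commonly used for supervised instruction fine-tuning. This is a hypothetical illustration, not DARWIN's actual code — the function name, the template, and the placeholder target are all assumptions; in the real pipeline a trained SIG model generates the instruction and response from the text.

```python
# Hypothetical sketch of SIG-style instruction generation: turn a passage
# from the scientific literature into a fine-tuning record. In DARWIN's
# actual pipeline a trained SIG model writes the instruction and response;
# here a fixed template stands in to show the data format only.

def make_instruction_pair(passage: str, topic: str) -> dict:
    """Wrap a literature passage into an instruction-tuning example.

    `topic` is a short label (e.g. "band gap") used to phrase the
    instruction; both arguments are illustrative, not DARWIN's API.
    """
    return {
        "instruction": f"Summarize what the following text says about {topic}.",
        "input": passage,
        # Placeholder target: a real SIG model would generate a grounded
        # answer from the passage rather than echo it back.
        "output": passage.strip(),
    }

record = make_instruction_pair(
    "Perovskite absorbers in this study show band gaps near 1.6 eV.",
    "band gap",
)
print(record["instruction"])
```

Records in this three-field format can be fed directly to common instruction-tuning scripts, which is what makes automated generation from raw scientific text attractive at the scale of tens of thousands of examples.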

