Foundation models are large-scale AI models trained on massive datasets that can be adapted to perform a wide range of downstream tasks across industries.
Foundation models are powerful artificial intelligence models—such as large language models (LLMs) or multimodal models—trained on diverse and extensive datasets to perform multiple functions. Unlike traditional AI models built for a single use case, foundation models can be fine-tuned for specific applications such as customer support, predictive analytics, or creative content generation.
Prominent examples include GPT, Claude, and Gemini. These models underpin the modern AI ecosystem and serve as the “foundation” for specialized applications built on top of them.
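The fine-tuning idea described above can be pictured with a minimal sketch: a pretrained "base" model is frozen, and only a small task-specific layer on top of it is trained for the downstream application. Everything here (the base function, data, and learning rate) is a toy illustration, not a real foundation-model workflow.

```python
# Minimal sketch of fine-tuning: the pretrained "base" is frozen and only
# a small task-specific head is trained. All names and data are illustrative.

def base_features(x):
    """Stand-in for a frozen pretrained model: maps raw input to features."""
    return [x, x * x]  # frozen; never updated during fine-tuning

def train_head(data, lr=0.05, epochs=300):
    """Fit a linear head on top of the frozen base features via SGD."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            feats = base_features(x)
            pred = sum(wi * fi for wi, fi in zip(w, feats)) + b
            err = pred - y
            # Gradient step updates only the head, never the base.
            w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
            b -= lr * err
    return w, b

# Toy "downstream task": learn y = 2*x + 1 from a handful of examples.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (-1.0, -1.0)]
w, b = train_head(data)
```

The same pattern, at vastly larger scale, is what lets one foundation model serve many downstream tasks: the expensive pretraining is reused, and only a comparatively small adaptation step is task-specific.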
In regulatory contexts like the EU AI Act, the relevant legal category is general-purpose AI models (GPAI models). These technologies require transparency, accountability, and governance to ensure responsible and ethical deployment.
Foundation models have reshaped how organizations develop and use AI by providing scalable, flexible tools that accelerate innovation and efficiency. They enable teams to build advanced capabilities quickly without starting from scratch.
However, their complexity and broad applicability introduce governance challenges related to explainability, bias, and data provenance. Organizations and regulators are increasingly adopting frameworks like AI governance and AI ethics to ensure these systems are developed and deployed responsibly.
Embedding governance into the use of foundation models helps balance innovation with accountability, compliance, and public trust.
OneTrust AI Governance streamlines oversight by making model selection and registration fast, consistent, and audit-ready. Teams can browse, filter, sort, and search a Hugging Face–powered Model Gallery of open-source models, then add a chosen model to their AI inventory with key context automatically captured (including description, task type, known limitations, and bias considerations) plus a direct reference back to the source. In the same guided flow, users can link the model to the relevant project and related entities (such as the provider or vendor relationship) at creation time, so foundation models are governed as part of a connected system of record.
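The registration flow described above can be sketched as a simple inventory record that captures the same context. This is a hypothetical illustration, not the OneTrust API; every class and field name here is an assumption made for the example.

```python
# Illustrative sketch (not the OneTrust API): a registration record for a
# foundation model added to an AI inventory. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ModelRegistration:
    name: str
    description: str
    task_type: str
    known_limitations: list = field(default_factory=list)
    bias_considerations: list = field(default_factory=list)
    source_url: str = ""      # direct reference back to the source
    linked_project: str = ""  # project the model is linked to at creation
    vendor: str = ""          # provider/vendor relationship

entry = ModelRegistration(
    name="example-org/example-llm",  # hypothetical model id
    description="Open-source instruction-tuned language model.",
    task_type="text-generation",
    known_limitations=["May produce inaccurate or outdated answers."],
    bias_considerations=["Training data may under-represent some dialects."],
    source_url="https://huggingface.co/example-org/example-llm",
    linked_project="customer-support-assistant",
    vendor="Example Org",
)
```

Keeping these fields together in one record is what makes the inventory a connected system of record: the model, its known risks, and its organizational relationships are reviewed and audited as a unit.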
Foundation models are the broader category of general-purpose AI models, which may be trained on text, images, audio, or other data types; LLMs are a subset designed specifically for natural language processing tasks.
Because of their scale and influence, foundation models are increasingly subject to stricter transparency and accountability requirements intended to keep them from introducing bias or systemic risk.
Organizations can implement AI impact assessments (AIIAs), data documentation, and explainability testing to promote responsible development and governance.
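One of the explainability tests mentioned above can be sketched in a few lines: permutation importance, which measures how much a model's accuracy drops when a single feature is shuffled. The model and data here are toy stand-ins chosen for illustration, not a real governed system.

```python
# Minimal sketch of one explainability test: permutation importance.
# A feature matters if shuffling its column degrades accuracy.
import random

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Accuracy drop when one feature column is shuffled."""
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    column = [row[feature_idx] for row in X]
    rng.shuffle(column)
    X_shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, column)]
    return baseline - accuracy(model, X_shuffled, y)

# Toy model that only looks at feature 0.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9], [0.8, 0.5], [0.3, 0.2]]
y = [model(row) for row in X]  # labels follow feature 0 exactly

drop_f0 = permutation_importance(model, X, y, 0)
drop_f1 = permutation_importance(model, X, y, 1)  # irrelevant feature
```

A shuffled irrelevant feature leaves accuracy unchanged (a drop of zero), while shuffling a decisive feature typically hurts it; recording such results alongside AIIAs and data documentation gives reviewers concrete evidence of what a model actually relies on.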