Upgrade on Docker
To run OpenMetadata with Docker, you can simply download the docker-compose.yml
file. Optionally, we added some Named Volumes to handle data persistence.
You can find more details about Docker deployment here.
Below we have highlighted the steps needed to upgrade to the latest version with Docker. Make sure to also look here for the specific details related to upgrading to 1.0.0.
Prerequisites
Every time you plan on upgrading OpenMetadata to a newer version, make sure to go over all these steps:
Backup your Metadata
Before upgrading your OpenMetadata version, we strongly recommend backing up the metadata.
The source of truth is stored in the underlying database (MySQL and Postgres are supported). During each version upgrade there is a database migration process that needs to run. It will run directly against your database and update the shape of the data to the newest OpenMetadata release.
It is important to back up the data because, if we face any unexpected issues during the upgrade process, you will be able to get back to the previous version without any loss.
You can learn more about how the migration process works here.
During the upgrade, please note that the backup is only for safety and should not be used to restore data to a higher version.
Since version 1.4.0, OpenMetadata encourages using the databases' built-in tools for creating logical backups of the metadata:
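For example, a minimal sketch of a logical backup with each database's native dump tool. The openmetadata_db database and openmetadata_user user below are the defaults from the Docker setup and are assumptions; adjust them to your deployment:

```bash
# MySQL: logical backup of the OpenMetadata database (assumed name: openmetadata_db)
mysqldump -u openmetadata_user -p openmetadata_db > openmetadata_backup.sql

# Postgres: equivalent logical backup with pg_dump
pg_dump -U openmetadata_user -d openmetadata_db > openmetadata_backup.sql
```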
For PROD deployments, we recommend relying on cloud services for your databases, be it AWS RDS, Azure SQL, or GCP Cloud SQL.
If you're a user of these services, you can leverage their backup capabilities directly:
You can refer to the following guide to get more details about the backup and restore:
Update sort_buffer_size (MySQL) or work_mem (Postgres)
Before running the migrations, it is important to update these parameters to ensure there are no runtime errors. A safe value would be setting them to 20MB.
If using MySQL
You can update it via SQL (note that it will reset after the server restarts):
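A sketch of the change (20971520 bytes = 20 MB):

```sql
-- Applies globally to new sessions, but resets after a server restart
SET GLOBAL sort_buffer_size = 20971520;
```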
To make the configuration persistent, you'd need to navigate to your MySQL Server install directory and update the my.ini or my.cnf file with sort_buffer_size = 20971520.
If using RDS, you will need to update your instance's Parameter Group to include the above change.
If using Postgres
You can update it via SQL (note that it will reset after the server restarts):
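A sketch of the equivalent change on Postgres:

```sql
-- Applies to the current session; see below for making it persistent
SET work_mem = '20MB';
```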
To make the configuration persistent, you'll need to update the postgresql.conf file with work_mem = 20MB.
If using RDS, you will need to update your instance's Parameter Group to include the above change.
Note that this value depends on the size of your query_entity table. If you still see Out of Sort Memory errors during the migration after bumping this value, you can increase it further.
After the migration is finished, you can revert these changes.
New Versioning System for Ingestion Docker Image
We are excited to announce a recent change in our version tagging system for our Ingestion Docker images. This update aims to improve consistency and clarity in our versioning, aligning our Docker image tags with our Python PyPI package versions.
Ingestion Docker Image Tags
To maintain consistency, our Docker images will now follow the same 4-digit versioning system as our Python package versions. For example, a Docker image version might look like 1.0.0.0.
Additionally, we will continue to provide a 3-digit version tag (e.g., 1.0.0) that will always point to the latest corresponding 4-digit image tag. This ensures ease of use for those who prefer a simpler version tag while still having access to the most recent updates.
Benefits
- Consistency: Both Python applications and Docker images will have the same versioning format, making it easier to track and manage versions.
- Clarity: The 4-digit system provides a clear and detailed versioning structure, helping users understand the nature and scope of changes.
- Non-Breaking Change: This update is designed to be non-disruptive. Existing Ingestions and dependencies will remain unaffected.
Example
Here’s an example of how the new versioning works:
- Python Application Version: 1.5.0.0
- Docker Image Tags:
  - 1.5.0.0 (specific version)
  - 1.5.0 (latest version in the 1.5.0.x series)
We believe this update will bring greater consistency and clarity to our versioning system. As always, we value your feedback and welcome any questions or comments you may have.
Backward Incompatible Changes
1.5.0
Multi Owners
OpenMetadata allows a single user or a team to be tagged as owners for any data assets. In Release 1.5.0, we allow users to tag multiple individual owners or a single team. This allows organizations to add ownership to multiple individuals without necessarily needing to create a team around them, as was previously required.
This is a backward incompatible change. If you are using the APIs, please make sure the owner field is now changed to owners.
Import/Export Format
To support the multi-owner format, we have now changed how we export and import the CSV file in glossary, services, database, schema, table, etc. The new format will be user:userName;team:TeamName.
If you are importing an older file, please make sure to make this change.
Pydantic V2
The core of OpenMetadata is the set of JSON Schemas that define the metadata standard. These schemas are automatically translated into Java, TypeScript, and Python code with Pydantic classes.
In this release, we have migrated the codebase from Pydantic V1 to Pydantic V2.
Deployment Related Changes (OSS only)
./bootstrap/bootstrap_storage.sh removed
The OpenMetadata community has built rolling upgrades for the database schema and data to make upgrades easier. This tool is now called ./bootstrap/openmetadata-ops.sh and has been part of our releases since 1.3. The bootstrap_storage.sh script doesn't support the new native schemas in OpenMetadata. Hence, we have removed this tool from this release.
While upgrading, please refer to our Upgrade Notes in the documentation. Always follow the best practices provided there.
Database Connection Pooling
OpenMetadata uses Jdbi to handle database-related operations such as read/write/delete. In this release, we introduced additional configs to help with connection pooling, allowing the efficient use of a database with low resources.
If your cluster is running at a large scale, please update the defaults to scale up the connections efficiently.
For the new configuration, please refer to the doc here.
Data Insights
The Data Insights application is meant to give you a quick glance at your data's state and allow you to take action based on the information you receive. To continue pursuing this objective, the application was completely refactored to allow customizability.
Part of this refactor was making Data Insights an internal application, no longer relying on an external pipeline. This means triggering Data Insights from the Python SDK will no longer be possible.
With this change, you will need to run a backfill on the Data Insights for the last couple of days, since the Data Assets data changed.
UI Changes
New Explore Page
The Explore page displays hierarchically organized data assets by grouping them into services > database > schema > tables/stored procedures. This helps users organically find the data asset they are looking for based on a known database or schema they were using. This is a new feature and changes the way the Explore page was built in previous releases.
Connector Schema Changes
In the latest release, several updates and enhancements have been made to the JSON schema across various connectors. These changes aim to improve security, configurability, and expand integration capabilities. Here's a detailed breakdown of the updates:
- KafkaConnect: Added schemaRegistryTopicSuffixName to enhance topic configuration flexibility for schema registries.
- GCS Datalake: Introduced the bucketNames field, allowing users to specify targeted storage buckets within the Google Cloud Storage environment.
- OpenLineage: Added saslConfig to enhance security by enabling SASL (Simple Authentication and Security Layer) configuration.
- Salesforce: Added sslConfig to strengthen the security layer for Salesforce connections by supporting SSL.
- DeltaLake: Updated the schema by moving metastoreConnection to a newly created metastoreConfig.json file. Additionally, introduced configSource to better define source configurations, with new support for metastoreConfig.json and storageConfig.json.
- Iceberg RestCatalog: Removed clientId and clientSecret as mandatory fields, making the schema more flexible for different authentication methods.
- DBT Cloud Pipelines: Added as a new connector to support cloud-native data transformation workflows using DBT.
- Looker: Expanded support to include connections using GitLab integration, offering more flexible and secure version control.
- Tableau: Enhanced support by adding capabilities for connecting with TableauPublishedDatasource and TableauEmbeddedDatasource, providing more granular control over data visualization and reporting.
Include DDL
During the Database Metadata ingestion, we can optionally pick up the DDL for both tables and views. During the metadata ingestion, we use the view DDLs to generate the View Lineage.
To reduce the processing time for out-of-the-box workflows, we are disabling the include DDL by default, whereas before, it was enabled, which potentially led to long-running workflows.
Secrets Manager
Starting with the release 1.5.0, the JWT Token for the bots will be sent to the Secrets Manager if you configured one. It won't appear anymore in your dag_generated_configs in Airflow.
Python SDK
The metadata insight command has been removed. Since the Data Insights application was moved to be an internal system application instead of relying on external pipelines, the SDK command to run the pipeline was removed.
Upgrade Process
Replace the docker compose file
- Stop the running compose deployment (see the command sketch after this list)
- Download the Docker Compose Service File from OpenMetadata GitHub Release page here
- Replace the existing Docker Compose Service File with the one downloaded from the above step
Please make sure to go through breaking changes and release highlights.
- Start the Docker Compose Service (see the command sketch after this list)
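A minimal sketch of the stop and start commands referenced above, assuming they are run from the directory containing the downloaded compose file (use -f to point at a different file name):

```bash
# Stop the running deployment
docker compose down

# Start the services again using the new compose file
docker compose up -d
```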
Post-Upgrade Steps
Reindex
Go to Settings -> Applications -> Search Indexing.
Click on Run Now.
In the configuration section, you can select the entities you want to reindex. Since this is required after the upgrade, we want to reindex All the entities.
(Optional) Update your OpenMetadata Ingestion Client
If you are running the ingestion workflows externally or using a custom Airflow installation, you need to make sure that the Python Client you use is aligned with the OpenMetadata server version.
For example, if you are upgrading the server to version x.y.z, you will need to update your client with:
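A hedged sketch of the client upgrade, keeping the x.y.z placeholder and using an illustrative plugin list (see the note on the plugin parameter below):

```bash
# Replace x.y.z with the version of your OpenMetadata server
pip install "openmetadata-ingestion[mysql,snowflake,s3]==x.y.z"
```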
The plugin parameter is a list of the sources that we want to ingest. An example would look like this: openmetadata-ingestion[mysql,snowflake,s3]==1.2.0. You will find specific instructions for each connector here.
Moreover, if working with your own Airflow deployment - not the openmetadata-ingestion image - you will also need to upgrade the openmetadata-managed-apis version:
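A sketch of the corresponding command, again keeping the x.y.z placeholder:

```bash
pip install "openmetadata-managed-apis==x.y.z"
```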
Re Deploy Ingestion Pipelines
Go to Settings -> {service entity} -> Pipelines.
Select the pipelines you want to redeploy and click Re Deploy.
If you are seeing broken DAGs, select all the pipelines from all the services and redeploy them.
Openmetadata-ops Script
Overview
The openmetadata-ops script is designed to manage and migrate databases and search indexes, reindex existing data into Elasticsearch or OpenSearch, and redeploy service pipelines.
Usage
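A minimal sketch of the invocation pattern, assuming the script is run from the OpenMetadata installation root (run the help command from the Examples section below for the full list of options):

```bash
./bootstrap/openmetadata-ops.sh <command>
```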
Commands
- analyze-tables: Analyzes the tables in the configured database and updates their statistics to improve query planning.
- changelog: Prints the change log of database migration.
- check-connection: Checks if a connection can be successfully obtained for the target database.
- deploy-pipelines: Deploys all the service pipelines.
- drop-create: Deletes any tables in the configured database and creates new tables based on the current version of OpenMetadata. This command also re-creates the search indexes.
- info: Shows the list of migrations applied and the pending migrations waiting to be applied on the target database.
- migrate: Migrates the OpenMetadata database schema and search index mappings.
- migrate-secrets: Migrates secrets from the database to the configured Secrets Manager. Note that this command does not support migrating between external Secrets Managers.
- reindex: Reindexes data into the search engine from the command line.
- repair: Repairs the DATABASE_CHANGE_LOG table, which is used to track all the migrations on the target database. This involves removing entries for the failed migrations and updating the checksum of migrations already applied on the target database.
- validate: Checks if all the migrations have been applied on the target database.
Examples
Display Help
To display the help message:
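A sketch, assuming the standard --help flag:

```bash
./bootstrap/openmetadata-ops.sh --help
```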
Migrate Database Schema
To migrate the database schema and search index mappings:
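A sketch using the migrate command listed above:

```bash
./bootstrap/openmetadata-ops.sh migrate
```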
Reindex Data
To reindex data into the search engine:
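A sketch using the reindex command listed above:

```bash
./bootstrap/openmetadata-ops.sh reindex
```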
Troubleshooting
Permission Denied when running metadata openmetadata-imports-migration
If you have a Permission Denied error thrown when running metadata openmetadata-imports-migration --change-config-file-path, you might need to change the permission on the /opt/airflow/dags folder. SSH into the ingestion container and check the permissions on the folder by running the below commands:
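A sketch of the permission check, assuming the paths mentioned above:

```bash
ls -l /opt/airflow/
ls -l /opt/airflow/dags
```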
Both the dags folder and the files inside dags/ should have airflow root ownership. If this is not the case, simply run the below command:
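A sketch of the ownership fix, assuming the airflow user and root group mentioned above:

```bash
chown -R airflow:root /opt/airflow/dags
```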
Broken DAGs can't load config file: Permission Denied
You might need to change the permission on the /opt/airflow/dag_generated_config folder. SSH into the ingestion container and check the permissions on the folder by running the below commands:
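A sketch of the permission check for this folder:

```bash
ls -l /opt/airflow/
ls -l /opt/airflow/dag_generated_config
```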
Both the folder and the files inside it should have airflow root ownership. If this is not the case, simply run the below command:
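A sketch of the ownership fix, mirroring the previous section:

```bash
chown -R airflow:root /opt/airflow/dag_generated_config
```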