
Lightdash
In this section, we provide guides and references to use the Lightdash connector.
Configure and schedule Lightdash metadata and profiler workflows from the OpenMetadata UI.
Requirements
To integrate Lightdash, ensure you are using OpenMetadata version 1.2.x or higher.
Python Requirements
To run the Lightdash ingestion, you will need to install:
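Assuming the connector follows OpenMetadata's usual per-connector extras naming (the `lightdash` extra here is an assumption, so check the connector's documentation if the install fails), the installation looks like:

```bash
# Install the OpenMetadata ingestion framework with the Lightdash connector extra
pip3 install "openmetadata-ingestion[lightdash]"
```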
Metadata Ingestion
All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to Lightdash.
To create and run a Metadata Ingestion workflow, we will follow these steps to build a YAML configuration that connects to the source, processes the Entities as needed, and reaches the OpenMetadata server.
The workflow is modeled around the following JSON Schema.
1. Define the YAML Config
This is a sample config for Lightdash:
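The sketch below shows the general shape of such a configuration. The connection keys (`hostPort`, `apiKey`, `projectUUID`, `spaceUUID`, `proxyAuthentication`) mirror the parameters described in the next section, but the authoritative key names and types live in the connector's JSON Schema; all values are placeholders.

```yaml
source:
  type: lightdash
  serviceName: local_lightdash
  serviceConnection:
    config:
      type: Lightdash
      hostPort: https://lightdash.example.com:3000
      apiKey: <your-api-key>
      projectUUID: <your-project-uuid>
      spaceUUID: <your-space-uuid>
      # Only needed when the instance sits behind an authenticating proxy
      proxyAuthentication: <proxy-credentials>
  sourceConfig:
    config:
      type: DashboardMetadata
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <jwt-token>
```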
Source Configuration - Service Connection
- Host and Port: Specify the network location where your Lightdash instance is accessible, combining both hostname and port in a URI format: either `http://hostname:port` or `https://hostname:port`, based on your security needs. Example: for a local setup, use `http://localhost:8080`; for a server deployment, it might be `https://lightdash.example.com:3000`. Ensure the specified port is open and accessible through network firewall settings.
- API Key: This key authenticates requests to your Lightdash instance. Keep the API Key secure, sharing it only with authorized applications or users.
- Project UUID: This unique identifier links API requests or configurations to a specific project in Lightdash.
- Space UUID: Identifies a specific "Space" in Lightdash, used to organize dashboards, charts, and assets.
- Proxy Authentication: If your Lightdash instance requires authentication through a proxy server, provide the proxy credentials here. Proxy authentication is common in environments where a proxy server controls access to external resources, including Lightdash.
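Once the YAML file is saved, the workflow can be run with the `metadata` CLI that ships with the ingestion package; the file path below is a placeholder for wherever you stored your configuration.

```bash
# Run the Lightdash metadata ingestion workflow from the YAML config
metadata ingest -c /path/to/lightdash.yaml
```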