JIRA

How to configure a JIRA connection


This document describes the Dataworkz connector configuration required to access JIRA.

Prerequisite

You need a JIRA Admin account to connect to JIRA from Dataworkz. Follow the steps below to create an OAuth App in JIRA for Dataworkz.

  1. Log in to the JIRA developer console.

  2. Enable OAuth by following the guidelines provided in the JIRA documentation. Keep the following points in mind:

    • The callback URL can be found in the Dataworkz UI (see the Create Connector for JIRA section for details).

    • Add the following JIRA API permissions:

      • View JIRA issue data

      • View user profiles

      • Read Insight objects

  3. Make a note of the Client ID and Secret. The authorized user will need these when creating the connector in Dataworkz (a sketch of how they fit into the OAuth authorization URL follows this list).
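
For reference, the app created above is a standard Atlassian OAuth 2.0 (3LO) app. The sketch below shows roughly how the Client ID, callback URL, and permissions come together in the authorization URL. It is an illustration only; the scope identifiers (read:jira-work, read:jira-user) and placeholder values are assumptions, and the exact scopes for your app are listed in the JIRA developer console.

    # Illustration only: how the Client ID, callback URL, and permissions above
    # map onto Atlassian's OAuth 2.0 (3LO) authorization URL. Scope names and
    # placeholder values are assumptions; confirm them in the developer console.
    from urllib.parse import urlencode

    CLIENT_ID = "<client-id-from-developer-console>"   # noted in step 3
    CALLBACK_URL = "<callback-url-from-dataworkz-ui>"  # see Create Connector for JIRA

    params = {
        "audience": "api.atlassian.com",
        "client_id": CLIENT_ID,
        "scope": "read:jira-work read:jira-user",      # e.g. view issue data, view user profiles
        "redirect_uri": CALLBACK_URL,
        "response_type": "code",
        "prompt": "consent",
    }
    print("https://auth.atlassian.com/authorize?" + urlencode(params))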

Create Connector for JIRA

  • Log in to the Dataworkz application

  • Go to Configuration -> SaaS Applications -> JIRA

  • Click the + icon to add a new configuration

  • Enter a name for the configuration

  • Select the JIRA installation type (Cloud or On-premise)

  • Select the authentication type

    • Private Connected App (OAuth App)

  • Custom App already created?

    1. Select "Yes" if the app has already been created.

    2. Select "No" if it has not. A screen with the redirect URL will pop up; copy the redirect URL and go to the Prerequisite section.

  • Enter the Client ID and Secret saved during app creation

  • Select the scopes that need to be used

  • Select the workspace & collection

  • Click Save. You will be prompted to log in to your JIRA account and authorize Dataworkz to access the JIRA APIs (see the sketch below)

The newly created connector will show up in the list of JIRA configurations.
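
Behind the scenes, the authorize step triggered by Save completes a standard OAuth 2.0 authorization-code exchange using the Client ID, Secret, and callback URL configured above. The sketch below illustrates that exchange with assumed placeholder values; Dataworkz performs it for you.

    # Illustration only: the authorization-code exchange performed after you
    # authorize Dataworkz in JIRA. Placeholder values are assumptions.
    import requests

    resp = requests.post(
        "https://auth.atlassian.com/oauth/token",
        json={
            "grant_type": "authorization_code",
            "client_id": "<client-id>",
            "client_secret": "<client-secret>",
            "code": "<code-returned-to-the-callback-url>",
            "redirect_uri": "<callback-url-from-dataworkz-ui>",
        },
        timeout=30,
    )
    access_token = resp.json()["access_token"]  # used to call the JIRA APIs on your behalf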

Add Configuration

Click the newly created connector, then click the + icon to add a configuration for JIRA.

  1. Enter a name for the dataset

  2. Select the JIRA project that you need to access

  3. Select the issue type

  4. Select the fields that need to be read

  5. Select whether to read all historical data or only a specific date range

  6. Select the incremental pull criteria (see the sketch after these steps)

    • created

    • updated

  7. Enable or disable the recurring job

  8. Click Add

This completes the Dataworkz configuration for JIRA.
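
To illustrate what the incremental pull criteria mean, the sketch below restricts a JQL query by the updated field against the JIRA Cloud REST API. It shows the concept only, not how Dataworkz implements the pull; the endpoint, cloud ID, project key, field list, and watermark value are assumptions.

    # Illustration only: an incremental pull keyed on the "updated" field via JQL.
    # Endpoint, cloud ID, project key, fields, and watermark are assumptions.
    import requests

    CLOUD_ID = "<jira-cloud-id>"
    ACCESS_TOKEN = "<access-token>"
    last_run = "2024-01-01 00:00"  # watermark recorded by the previous job run

    resp = requests.get(
        f"https://api.atlassian.com/ex/jira/{CLOUD_ID}/rest/api/3/search",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={
            "jql": f'project = "<project-key>" AND updated >= "{last_run}" ORDER BY updated ASC',
            "fields": "summary,status,created,updated",  # the fields selected in step 4
            "maxResults": 100,
        },
        timeout=30,
    )
    issues = resp.json().get("issues", [])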
