Use Jira Software As a Content Source
SearchUnify can index the issues stored in Jira Software instances. This article walks you through the process of setting up Jira Software as a content source.
Permissions
You must have access to the projects to be crawled and the Administer Projects permission. This permission is required to fetch the permission scheme from Jira.
You must belong to a group that has the “Browse Users and Groups” global permission in Atlassian. This permission is needed to fetch each user's email address and account ID so that SearchUnify can respect permissions.
The number of responses received from Jira over a given period depends on Jira's rate limits. You can find the latest rate limits on the Atlassian website.
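When a limit is exceeded, Jira Cloud typically answers with HTTP 429 and a Retry-After header. The snippet below is a minimal sketch of how a client can back off in that situation; the instance URL, credentials, and JQL are placeholders for illustration, not part of the SearchUnify configuration.

```python
import time
import requests

# Placeholders for illustration only; use your own instance and credentials.
BASE_URL = "https://your-domain.atlassian.net"
AUTH = ("you@example.com", "your-api-token")  # Jira Cloud Basic auth: email + API token

def get_with_backoff(path, params=None, max_retries=5):
    """Call a Jira REST endpoint and honor HTTP 429 rate-limit responses."""
    for attempt in range(max_retries):
        response = requests.get(f"{BASE_URL}{path}", params=params, auth=AUTH)
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Jira signals throttling with 429; wait for the advertised interval before retrying.
        wait = int(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError("Rate limit not lifted after retries")

# Example: page through recently updated issues, as a crawler would.
issues = get_with_backoff("/rest/api/2/search", params={"jql": "updated >= -1d", "maxResults": 50})
print(issues["total"])
```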
Establish a Connection
-
Find Jira and click Add.
-
Under the Authentication tab, enter the required details:
-
Give your content source a Name.
-
Enter the web address of your Jira Software instance in Client URL.
-
Select an Authentication Method from the dropdown. To learn more, check out Which Authentication Method to Select for Jira Software?
NOTE 1 The Password field in Basic authentication requires an API token. This article explains how to generate it. Alternatively, check out the official Atlassian documentation: Create an API token. A sketch for verifying the token appears after these steps.
NOTE 2 OAuth requires a Jira Software app. Learn how to create one, or refer to the official Atlassian documentation: Generate an RSA public/private key pair.
-
Select the content Language. If your clients report issues in German, select -de; for French, pick -fr. You can set multiple content languages. The default content language is English (-en).
-
After entering the required details, click Connect.
Once the connection has been set up successfully, you will be directed to the next action: Set Frequency.
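If you use Basic authentication, you can sanity-check the email and API token pair before entering them in the Password field. The snippet below is a minimal sketch that calls Jira Cloud's /rest/api/2/myself endpoint; the URL and credentials are placeholders, and this check is optional rather than part of the SearchUnify setup.

```python
import requests

# Placeholders for illustration; use your own Jira Cloud URL, email, and API token.
BASE_URL = "https://your-domain.atlassian.net"
EMAIL = "you@example.com"
API_TOKEN = "your-api-token"

# /rest/api/2/myself returns the profile of the authenticated user.
response = requests.get(f"{BASE_URL}/rest/api/2/myself", auth=(EMAIL, API_TOKEN))

if response.ok:
    me = response.json()
    # The account ID and email address are the attributes used to respect permissions.
    print("Authenticated as:", me.get("displayName"), me.get("accountId"))
else:
    print("Authentication failed:", response.status_code, response.text)
```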
Re-Connect
The Authentication screen is displayed when an already-created Content Source is opened for editing. An admin can edit a Content Source for multiple reasons, including:
-
To reauthenticate
-
To fix a crawl error
-
To change frequency
-
To add or remove an object or a field for crawling
When a Content Source is edited, either a Connect or a Re-Connect button is displayed.
-
Case 1: When the Connect button is displayed:
-
The Connect button is displayed when the Content Source authentication is successful. Along with the button, a message is displayed: "There are no crawl errors and the Content Source authentication is valid."
-
Fig. The Connect button is displayed on the Authentication tab.
-
Case 2: When the Re-Connect button is displayed:
-
The Re-Connect button is displayed when the authentication details change or the authentication fails for any reason.
-
In both cases, the Content Source connection must be authenticated again. To reauthenticate a Content Source, enter the authentication details, and click Re-Connect.
-
Fig. The Re-Connect button is displayed on the Authentication tab.
Set Up Crawl Frequency
The first crawl is always performed manually after configuring the content source. In the Choose a Date field, select a date to start the crawl; only data created after the selected date will be crawled. For now, leave the frequency set to its default value, Never, and click Set.
Fig. The Frequency tab when "Frequency" is set to "Never".
Select Fields for Indexing
You can index all issue content fields or only a few of them.
-
Under the Rules tab, you will land on the By Content Type subtab.
-
Click EDIT to see the list of pre-configured fields. You can add custom fields whose data you want to crawl; a sketch for discovering the custom field IDs in your instance follows these steps.
NOTE. You can add or delete content fields. However, it is not recommended that users other than admins make any changes to the fields.
- Navigate to By Projects. Use the index to find your project and select the projects whose data you want to crawl. If no project is selected, all the projects are crawled.
- Press Save.
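If you are unsure which custom fields exist in your instance before adding them in the EDIT step above, you can list them through Jira's REST API. This is a minimal sketch using the same placeholder URL and credentials as earlier; SearchUnify itself does not require this step.

```python
import requests

# Placeholders for illustration; substitute your own instance and credentials.
BASE_URL = "https://your-domain.atlassian.net"
AUTH = ("you@example.com", "your-api-token")

# /rest/api/2/field returns every field in the instance, built-in and custom.
fields = requests.get(f"{BASE_URL}/rest/api/2/field", auth=AUTH).json()

for field in fields:
    if field.get("custom"):
        # The id (for example, customfield_10021) is how Jira identifies the field.
        print(field["id"], "-", field["name"])
```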
You have successfully added Jira as a content source in SearchUnify. Perform a manual crawl to start indexing data in SearchUnify.
Find and Replace
Users on the Q2 '24 release or a later version will notice a new button next to each object on the Rules screen. It resembles a magnifying glass and is labeled "Find and Replace." You can use this feature to find and replace values in a single field or across all fields. The changes will occur in the search index and not in your content source.
Fig. The "Find and Replace" button on the Rules tab in the Actions column.
Find and Replace proves valuable in various scenarios. A common use case is when a product name is altered. Suppose your product name has changed from "SearchUnify" to "SUnify," and you wish for the search result titles to immediately reflect this change.
-
To make the change, click the Find and Replace button.
-
Now, choose either "All" or a specific content source field from the "Enter Name" dropdown. When "All" is selected, any value in the "Find" column is replaced with the corresponding value in the "Replace" column across all content source fields. If a particular field is chosen, the old value is replaced with the new value solely within the selected field.
-
Enter the value to be replaced in the Find column and the new value in the Replace column. Both columns accept regular expressions; an illustrative example follows these steps.
Fig. Snapshot of Find and Replace.
-
Click Add. You will see a warning if you are replacing a value in all fields.
-
Click Save to apply the settings.
-
Run a crawl for the updated values to be reflected in the search results.
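Because the Find and Replace columns accept regular expressions, the substitution behaves much like a standard regex replace applied to field values in the search index. The snippet below only illustrates that behavior with the product-rename scenario above; it is not how SearchUnify applies the rule internally.

```python
import re

# Example values from the scenario above.
find_pattern = r"SearchUnify"
replacement = "SUnify"

titles = [
    "Getting Started with SearchUnify",
    "SearchUnify Release Notes",
]

# Each indexed value matching the Find pattern is rewritten with the Replace value.
updated = [re.sub(find_pattern, replacement, title) for title in titles]
print(updated)  # ['Getting Started with SUnify', 'SUnify Release Notes']
```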
After the First Crawl
Return to the Content Sources screen and start a manual crawl from the Actions column. The number of indexed documents is updated after the crawl is complete. You can view crawl progress from the Actions column. Documentation on crawl progress is in View Crawl Logs.
Once the first crawl is complete, open the content source for editing from the Actions column and set a crawl frequency.
-
In Choose a Date, click the field to open a calendar and select a date. Only the data created or updated after the selected date is indexed.
-
The following options are available for the Frequency field (an illustrative scheduling sketch follows this list):
-
When Never is selected, the content source is not crawled until an admin opts for a manual crawl on the Content Sources screen.
-
When Minutes is selected, a new dropdown appears where the admin can choose between three values: 15, 20, and 30. Picking 20 means that the content source crawling starts every 20 minutes.
-
When Hours is selected, a new dropdown is displayed where the admin can choose from eight values: 1, 2, 3, 4, 6, 8, 12, and 24. Selecting 8 initiates content crawling every 8 hours.
-
When Daily is selected, a new dropdown is displayed where the admin can pick a value between 0 and 23. If 15 is selected, the content source crawling starts at 3:00 p.m. (1500 hours) each day.
-
When Day of Week is selected, a new dropdown is displayed where the admin can pick a day of the week. If Tuesday is chosen, then content source crawling starts at 0000 hours on every Tuesday.
-
When Day of Month is selected, a new dropdown appears where the admin can select a value between 1 and 30. If 20 is chosen, then content source crawling starts on the 20th of each month.
It is recommended to pick a date between the 1st and 28th of the month. If 30 is chosen, then the crawler may throw an error in February. The error will be “Chosen date will not work for this month.”
-
When Yearly is selected, the content source crawling starts at midnight on 1 January each year.
Fig. The content source crawling starts at 00:00 on each Tuesday.
-
Click Set to save the crawl frequency settings.
-
Click Save.
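The crawl frequency is configured entirely through the UI; SearchUnify does not expose a cron field. For readers who think in cron terms, the mapping below is only a rough restatement of the documented schedules, not the product's internal scheduler.

```python
# Illustrative cron equivalents of the Frequency options described above.
# These are not SearchUnify settings; they only restate the documented schedules.
frequency_as_cron = {
    "Minutes: 20":      "*/20 * * * *",  # every 20 minutes
    "Hours: 8":         "0 */8 * * *",   # every 8 hours
    "Daily: 15":        "0 15 * * *",    # every day at 15:00
    "Day of Week: Tue": "0 0 * * 2",     # Tuesdays at 00:00
    "Day of Month: 20": "0 0 20 * *",    # the 20th of every month at 00:00
    "Yearly":           "0 0 1 1 *",     # 1 January at 00:00
}

for option, cron in frequency_as_cron.items():
    print(f"{option:18s} -> {cron}")
```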
Data Deletion and SU Index
One way to update the index in real time is to enable event subscriptions, which supplement existing crawls and synchronize data between your Jira instance and SearchUnify. The instructions are in Synchronize SearchUnify Index with Jira through Event Subscription.
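The step-by-step setup is in the linked article. The sketch below only illustrates the kind of real-time event Jira emits when an issue changes; the endpoint path and the use of Flask are assumptions for illustration, not SearchUnify's implementation.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/jira-events", methods=["POST"])  # hypothetical endpoint path
def jira_event():
    payload = request.get_json(force=True)
    # Jira webhook payloads carry an event name such as "jira:issue_updated"
    # or "jira:issue_deleted", plus the affected issue.
    event = payload.get("webhookEvent")
    issue_key = payload.get("issue", {}).get("key")
    print(f"Received {event} for {issue_key}")
    # A real subscriber would update or delete the corresponding index document here.
    return "", 204

if __name__ == "__main__":
    app.run(port=5000)
```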
Jira On-Premises Users
If your organization uses Jira On-Premises, write to your customer support representative. They will share with you a list of SearchUnify IPs to be whitelisted. Once the IPs have been whitelisted, SearchUnify can crawl the data in your Jira instance.