Requirements
- Advanced Analytics
Robin's Analytics API helps your team better understand the bigger picture of what’s happening at your workplace. This API helps you bring Robin data into your preferred tool for a more holistic workplace view, helping you drive your hybrid strategy based on criteria unique to your organization. Below is one example of how to do this with Redshift.
How to find your Account ID & generate an API token in Robin
1. Log into Robin and navigate to Manage > Integrations, then scroll down to the "API Tokens" section.
2. Click + Generate new token.
3. Name your token and check the "Basic_read permissions" box.
4. Click Generate Token.
5. Copy your token and save it.
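The export script in the next section reads your account ID and token from the ACCOUNT_ID and READ_TOKEN environment variables. If you want a quick, optional check that both are set before running it, a minimal sketch:

import os

# Optional sanity check: the export script below reads these two environment variables.
for name in ('ACCOUNT_ID', 'READ_TOKEN'):
    if not os.getenv(name):
        raise SystemExit(f'{name} is not set; export it before running the export script.')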
How to create an export request
The code below makes an export request for a single day of desk reservations and then polls the API to determine when the request has completed. Once the request has completed, the data is saved to disk using the from_date provided as the filename.
The code below will check if the export request is ready up to 6 times with 10 seconds between each check – this is to ensure your requests aren’t rate-limited.
Note: If you do not already have the requests package installed, you'll need to install it like so: pip install requests
It's not required, but it’s recommended to install packages in a virtual environment.
import os
import time
from datetime import datetime
from datetime import timedelta

import requests

account_id = os.getenv('ACCOUNT_ID')  # <-- add your account ID here, e.g. 00
read_token = os.getenv('READ_TOKEN')  # <-- add your token here


def make_export_creation_request(export_type, from_date):
    # Request a one-day export and return the export ID assigned by the API.
    to_date = from_date + timedelta(1)
    print(f'Requesting an export from {from_date} to {to_date}.')
    url = f'https://api.robinpowered.com/v1.0/insights/exports/organizations/{account_id}/{export_type}'
    response_object = requests.post(
        url,
        headers={
            'Authorization': f'Access-token {read_token}',
            'Content-type': 'application/json'
        },
        json={'from': from_date.isoformat(), 'to': to_date.isoformat()}
    )
    response_object.raise_for_status()
    response = response_object.json()
    export_id = response['data']['export_id']
    print(f'The export request has been received and assigned the ID {export_id}.')
    return export_id


def retrieve_export(export_id, from_date):
    # Poll for the export up to 6 times, 10 seconds apart, and save the CSV to disk.
    url = f'https://api.robinpowered.com/v1.0/insights/exports/{export_id}'
    iteration = 0
    while iteration < 6:
        print(f'Attempting to retrieve export {export_id}')
        response_object = requests.get(url, headers={
            'Authorization': f'Access-token {read_token}'
        })
        if response_object.status_code != 404:
            # The export is ready; save the CSV using the from_date as the filename.
            filename = f'./{from_date}.csv'
            response_object.raise_for_status()
            with open(filename, 'wb') as file:
                file.write(response_object.content)
            print(f'The CSV content was received and saved as {filename}.')
            break
        iteration += 1
        print(f'{iteration}. Export not ready. Will try again in 10 seconds.')
        time.sleep(10)


# Request that a desk reservation export be prepared.
from_date = datetime.strptime('2022-12-01T00:00:00+0400', '%Y-%m-%dT%H:%M:%S%z')
export_id = make_export_creation_request('desks', from_date)

# Get the export contents and save it to disk.
retrieve_export(export_id, from_date)
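If you need more than a single day of data, one approach is to loop over a range of dates and request one export per day, reusing the two functions above. A minimal sketch, assuming it is appended to the script above (the date range shown is only illustrative):

# Illustrative date range; adjust the start date and number of days to the period you want.
start = datetime.strptime('2022-12-01T00:00:00+0400', '%Y-%m-%dT%H:%M:%S%z')
days_to_export = 7

for offset in range(days_to_export):
    day = start + timedelta(days=offset)
    export_id = make_export_creation_request('desks', day)
    retrieve_export(export_id, day)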
How to load a desk export CSV into Redshift
Before you can load the export API data into Redshift, you need to create a schema and table to load the data into. Run the commands below:
1. Create a schema:
create schema test_schema;
2. Create a table:
create table test_schema.desk_export (
    organization varchar,
    building varchar,
    floor varchar,
    desk_group varchar,
    desk_id integer,
    desk_name varchar,
    reservation_id varchar,
    series_id bigint,
    "type" varchar,
    "start" timestamp,
    "end" timestamp,
    hour_duration numeric,
    checked_in_at timestamp,
    canceled_at timestamp,
    cancellation_type varchar,
    creator_name varchar,
    creator_department varchar,
    creator_groups varchar,
    creator_email varchar,
    assignee_name varchar,
    assignee_department varchar,
    assignee_groups varchar,
    assignee_email varchar
);
3. Now that you have a schema and a table, you need to use the Redshift COPY command to load data, but that requires you to upload your data to S3, either via the AWS Console or the AWS CLI. We've outlined the steps for both methods below.
A. Uploading data to S3 via the AWS console:
- In the S3 console, select the bucket you wish to upload your data to, then hit the Upload button.
- Next, click Add Files and select the file that you generated from the Export API. Finally, hit Upload to confirm the upload.
B. Uploading data to S3 via the AWS CLI:
aws s3 cp /<path>/<to>/<local>/<file> s3://<bucket-name>/
When filled in with the proper values, the command above will copy a file from your local system to the specified S3 bucket. If you don't wish to have your file at the top level of the bucket, you can add additional nesting by changing the S3 location:
aws s3 cp /<path>/<to>/<local>/<file> s3://<bucket-name>/<subdirectory>/<another-subdirectory>/
At this point, you have created a schema and table in Redshift to house your data and uploaded the data to S3. Now you need to connect the dots and COPY that data into your table. (If you'd rather script the upload and COPY, see the sketch after step 5.)
Heads up!
As a prerequisite to running a COPY command in Redshift, you'll need either an IAM role with the correct permissions or access to your AWS keys. AWS strongly suggests using role-based authentication for COPY commands when possible.
4. In Redshift, run the following command and fill in the appropriate values according to your preferred method.
With a role:
copy test_schema.desk_export
from 's3://<bucket-name>/<file-name>'
iam_role 'arn:aws:iam::<account-number>:role/<role-name>'
format as csv
ignoreheader 1;
With AWS keys:
copy test_schema.desk_export
from 's3://<bucket-name>/<file-name>'
access_key_id '<access-key-id>'
secret_access_key '<secret-access-key>'
session_token '<token>'
format as csv
ignoreheader 1;
5. Finally, you should have data in the test_schema.desk_export table:
select * from test_schema.desk_export;
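If you'd like to automate steps 3 through 5 from Python instead of using the console and a SQL client, here is a minimal sketch. It assumes the schema and table above already exist, uses boto3 for the S3 upload and psycopg2 for the Redshift connection (choices of tooling, not requirements), and the bucket, file, cluster, credential, and role values are all placeholders to replace with your own.

import boto3
import psycopg2

# Placeholder values -- replace with your own bucket, file, cluster, and role details.
bucket = '<bucket-name>'
local_file = '<path>/<to>/<exported-file>.csv'   # the CSV written by the export script
s3_key = 'desk-exports/2022-12-01.csv'
iam_role = 'arn:aws:iam::<account-number>:role/<role-name>'

# Step 3: upload the export CSV to S3.
boto3.client('s3').upload_file(local_file, bucket, s3_key)

# Steps 4 and 5: COPY the CSV into Redshift and confirm the rows loaded.
conn = psycopg2.connect(
    host='<cluster-endpoint>.redshift.amazonaws.com',
    port=5439,
    dbname='<database>',
    user='<user>',
    password='<password>'
)
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute(f"""
        copy test_schema.desk_export
        from 's3://{bucket}/{s3_key}'
        iam_role '{iam_role}'
        format as csv
        ignoreheader 1;
    """)
    cur.execute('select count(*) from test_schema.desk_export;')
    print(f'Rows loaded: {cur.fetchone()[0]}')
conn.close()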