Desk.com to Amazon S3

This page provides you with instructions on how to extract data from Desk.com and load it into Amazon S3. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

What is Desk?

Desk.com, owned by Salesforce, is an online help desk and customer service application aimed at small businesses. It supports email, phone, and social media channels, and provides a reporting interface for tracking ticket and agent performance.

What is S3?

Amazon S3 (Simple Storage Service) provides cloud-based object storage through a web service interface. You can use S3 to store and retrieve any amount of data, at any time, from anywhere on the web. S3 objects, which may be structured in any way, are stored in resources called buckets.

Getting data out of Desk

Desk.com provides a REST API that lets you pull information on dozens of resources, such as cases, customers, and topics. If you wanted to get a list of topics in your system, for example, you could call GET /api/v2/topics. You can provide optional parameters to limit and sort the information returned.
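
As an illustration, here's a minimal Python sketch of that call using the requests library. The subdomain and credentials are placeholders; Desk.com also supports OAuth, which is a better fit for production scripts than Basic auth.

import requests

DESK_SITE = "https://yourcompany.desk.com"  # placeholder subdomain

response = requests.get(
    f"{DESK_SITE}/api/v2/topics",
    auth=("agent@example.com", "password"),  # Basic auth; placeholder credentials
    params={"per_page": 50},                 # optional paging parameter
)
response.raise_for_status()
for topic in response.json()["_embedded"]["entries"]:
    print(topic["name"], topic["updated_at"])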

Sample Desk data

Desk.com's API returns JSON-format data. The data returned for a "list topics" call might look like this:

{
  "total_entries": 2,
  "page": 1,
  "_links": {
    "self": {
      "href": "/api/v2/topics?page=1&per_page=50",
      "class": "page"
    },
    "first": {
      "href": "/api/v2/topics?page=1&per_page=50",
      "class": "page"
    },
    "last": {
      "href": "/api/v2/topics?page=1&per_page=50",
      "class": "page"
    },
    "next": null,
    "previous": null
  },
  "_embedded": {
    "entries": [
      {
        "name": "Customer Support",
        "description": "This is key to going from good to great",
        "position": 1,
        "allow_questions": true,
        "in_support_center": true,
        "created_at": "2017-10-08T18:18:06Z",
        "updated_at": "2017-10-13T18:18:06Z",
        "_links": {
          "self": {
            "href": "/api/v2/topics/1",
            "class": "topic"
          },
          "articles": {
            "href": "/api/v2/topics/1/articles",
            "class": "article"
          },
          "translations": {
            "href": "/api/v2/topics/1/translations",
            "class": "topic_translation"
          }
        }
      },
      {
        "name": "Another Topic",
        "description": "Not the first one, but another one!",
        "position": 2,
        "allow_questions": true,
        "in_support_center": true,
        "created_at": "2017-10-08T18:18:06Z",
        "updated_at": "2017-10-13T18:18:06Z",
        "_links": {
          "self": {
            "href": "/api/v2/topics/2",
            "class": "topic"
          },
          "articles": {
            "href": "/api/v2/topics/2/articles",
            "class": "article"
          },
          "translations": {
            "href": "/api/v2/topics/2/translations",
            "class": "topic_translation"
          }
        }
      }
    ]
  }
}

Preparing Desk data

If you don’t already have a data structure in which to store the data you retrieve, you’ll have to create a schema for your data tables. Then, for each value in the response, you’ll need to identify a predefined datatype (INTEGER, DATETIME, etc.) and build a table that can receive it. Desk.com's documentation should tell you what fields are provided by each endpoint, along with their corresponding datatypes.
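
As a sketch, here's what a target table for the "list topics" response above might look like. It uses SQLite so the example is self-contained; in practice you'd use your warehouse's own DDL and types (VARCHAR, TIMESTAMP, and so on). The id column is an assumption, parsed from the _links.self.href path, since the topic entries don't carry a top-level id field.

import sqlite3

conn = sqlite3.connect("desk.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS topics (
        id                INTEGER PRIMARY KEY,  -- parsed from _links.self.href
        name              TEXT,
        description       TEXT,
        position          INTEGER,
        allow_questions   INTEGER,  -- SQLite has no BOOLEAN type; 0/1
        in_support_center INTEGER,
        created_at        TEXT,     -- ISO 8601, e.g. 2017-10-13T18:18:06Z
        updated_at        TEXT
    )
""")
conn.commit()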

Complicating things is the fact that the records retrieved from the source may not always be "flat" – some of the objects may actually be lists. This means you’ll likely have to create additional tables to capture the unpredictable cardinality in each record.
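
One common pattern, sketched below, is to split each record into a parent row plus rows in a child table keyed back to the parent. The record shape here is hypothetical (a labels list on a case-like record), purely to illustrate the technique; it isn't a guaranteed part of the Desk.com schema.

def flatten_record(record):
    """Split a nested record into a parent row and child rows."""
    parent = {
        "id": record["id"],
        "subject": record["subject"],
    }
    # Each element of the nested list becomes its own row in a child table,
    # keyed back to the parent by id.
    children = [
        {"record_id": record["id"], "label": label}
        for label in record.get("labels", [])
    ]
    return parent, children

parent, children = flatten_record(
    {"id": 42, "subject": "Login issue", "labels": ["bug", "urgent"]}
)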

Loading data into Amazon S3

To upload files you must first create an S3 bucket. Once you have a bucket you can add an object to it. An object can be any kind of file: a text file, data file, photo, or anything else. You can optionally compress or encrypt the files before you load them.
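
Here's a hedged sketch using boto3, the AWS SDK for Python, that gzips a batch of records and writes them to a bucket as a single object. The bucket name and key are placeholders, and boto3 is assumed to find AWS credentials in your environment or config files.

import gzip
import json

import boto3

records = [{"name": "Customer Support", "position": 1}]  # rows pulled from the API
body = gzip.compress(json.dumps(records).encode("utf-8"))

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-desk-exports",              # placeholder bucket name
    Key="desk/topics/2017-10-13.json.gz",  # placeholder object key
    Body=body,
    ContentType="application/json",
    ContentEncoding="gzip",
)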

Keeping Desk data up to date

At this point you've coded up a script or written a program to get the data you want and successfully moved it into S3. But how will you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow and resource-intensive.

Instead, identify key fields that your script can use to bookmark its progression through the data, so it can pick up where it left off on the next run. Timestamp fields such as updated_at or created_at (or an auto-incrementing ID, where one exists) work best for this. When you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in Desk.com.
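
A simple version of that bookmarking, sketched below, persists the newest updated_at value between runs and filters out anything older. The fetch_topics helper is a stand-in for the API call shown earlier; because ISO 8601 timestamps compare correctly as strings, the comparison stays simple.

import json
import os

BOOKMARK_FILE = "desk_bookmark.json"  # illustrative local state file

def fetch_topics():
    # Placeholder for the paginated API call shown earlier.
    return []

def load_bookmark():
    if os.path.exists(BOOKMARK_FILE):
        with open(BOOKMARK_FILE) as f:
            return json.load(f)["updated_at"]
    return "1970-01-01T00:00:00Z"  # first run: take everything

def save_bookmark(value):
    with open(BOOKMARK_FILE, "w") as f:
        json.dump({"updated_at": value}, f)

bookmark = load_bookmark()
new_records = [r for r in fetch_topics() if r["updated_at"] > bookmark]
if new_records:
    save_bookmark(max(r["updated_at"] for r in new_records))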

And remember, as with any code, once you write it, you have to maintain it. If Salesforce modifies Desk.com's API, or the API sends a field with a datatype your code doesn't recognize, you may have to modify the script. If your users want slightly different information, you definitely will have to.

Other data warehouse options

S3 is great, but sometimes you want a more structured repository that can serve as a basis for BI reports and data analytics — in short, a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, PostgreSQL, Snowflake, Microsoft Azure SQL Data Warehouse, or Panoply, all of which are SQL-based data warehouse platforms with similar query syntax. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Postgres, To Snowflake, To Azure SQL Data Warehouse, and To Panoply.

Easier and faster alternatives

If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.

Thankfully, products like Stitch were built to move data from Desk.com to Amazon S3 automatically. With just a few clicks, Stitch starts extracting your Desk.com data via the API, structuring it in a way that's optimized for analysis, and loading that data into your Amazon S3 bucket.