Importing Data Overview
- Updated 19 Mar 2024
- 4 minute read
If you've come to this article, you're probably looking to import data into Slate in one of four ways:
1. As part of a one-time migration of historical data from a legacy system
2. An ad hoc upload of files or documents
3. As part of a recurring data feed from another system for which we have a pre-configured integration. We call these pre-configured integrations source formats.
4. Same as #3, but with a product for which we don't (yet) have a pre-configured integration. You can create a custom source format to handle the recurring import.
Your course of action in Slate is determined by whichever of these options best describes your situation, and more than one may apply. Keep reading for an overview of each of the four import methods and links to their dedicated documentation.
1. Historical data migration
Importing historical data from a legacy system into Slate is a multi-step process in which your team segments the data in the legacy system before moving it over in stages. The Slate tool you will use to accomplish this is Upload Dataset.
Learn more
Check out the Historical Data Migration section and work your way through the articles for each type of data you need to bring over.
2. Ad-hoc file upload
Upload Dataset is Slate's import tool, and it gives you an entirely self-service way to import data as needed.
Learn more
Learn how to use Upload Dataset to import ad hoc files.
3. Recurring data feed with a pre-configured integration
Slate features pre-configured integrations with dozens of third party systems, including application sources, credential verification services, test score providers, and more. In Slate, the configurations that make these integrations possible are called source formats, and they help Slate parse incoming data and translate it into a format Slate can use.
Data comes in via batched files on an SFTP server or via web service calls. When the data arrives, a predefined source format handles all value and code translations. This helps ensure that year-over-year changes made to accommodate new fields or values are straightforward and can be handled by operations staff.
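As a sketch of the kind of value and code translation a source format performs, the snippet below maps incoming vendor codes to stored values. The field names, codes, and sample row are hypothetical examples for illustration, not actual Slate configuration:

```python
# Hypothetical translation tables: incoming vendor codes mapped to the
# values the destination system stores. Adding a new code for a new
# admissions cycle is a one-line change to the mapping.
VALUE_TRANSLATIONS = {
    "citizenship": {"US": "United States", "CA": "Canada"},
    "test_type": {"S": "SAT", "A": "ACT"},
}

def translate_row(row):
    """Return a copy of an incoming row with known codes translated."""
    translated = dict(row)
    for field, mapping in VALUE_TRANSLATIONS.items():
        if field in translated:
            # Unrecognized codes pass through unchanged, so a brand-new
            # vendor value surfaces in the data instead of failing.
            translated[field] = mapping.get(translated[field], translated[field])
    return translated

row = {"first": "Ada", "citizenship": "US", "test_type": "S"}
print(translate_row(row))
# {'first': 'Ada', 'citizenship': 'United States', 'test_type': 'SAT'}
```

Because the mappings live in one place, year-over-year changes stay a configuration edit rather than a code rewrite, which is the point the paragraph above makes about operations staff handling them.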
Learn more
Explore the list of pre-configured integrations available in the Source Format Library.
4. Recurring data feed without a pre-configured integration
If your institution needs to consume a data feed from a third party system not yet included in our list of pre-made source formats, you can always create a new one for that purpose.
What file layouts can Slate consume?
Slate can consume Excel spreadsheets, delimited text files, fixed-width files, XML, and JSON. We typically recommend delimited files with column headers, because columns can be added or removed at any time without breaking the import process in Slate. This allows the data feed's specification to change asynchronously.
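A minimal sketch of why header-based delimited files are resilient to layout changes: rows are read by column name, so inserting a column does not disturb existing mappings. The columns and data here are invented for illustration:

```python
import csv
import io

# Original feed, and the same feed after the sender adds a "major" column.
original = "first,last,email\nAda,Lovelace,ada@example.edu\n"
with_new_column = "first,last,major,email\nAda,Lovelace,Math,ada@example.edu\n"

def emails(delimited_text):
    # DictReader keys each value by its header, so column position is irrelevant.
    return [row["email"] for row in csv.DictReader(io.StringIO(delimited_text))]

print(emails(original))         # ['ada@example.edu']
print(emails(with_new_column))  # ['ada@example.edu'] -- unaffected by the new column
```

A fixed-width or position-based layout would need its specification updated in lockstep with the sender, which is exactly the coordination that header-based delimited files avoid.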
SFTP Imports
Most frequently, institutions deliver import files to an /incoming/ directory on the Technolutions SFTP servers, which Slate polls at least once every 15 minutes. Files matching a specified filename mask are loaded and routed into the Upload Dataset interface.
It is also possible to pull from a remote SFTP server, but that process is only as reliable as the remote server's availability. Since we can ensure that our servers remain highly available, the process is usually most reliable when using our infrastructure.
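The filename-mask matching described above can be sketched as follows; the mask and filenames are hypothetical:

```python
import fnmatch

# A glob-style mask like those used to select files from an /incoming/
# directory. Only files matching the pattern are picked up for import.
mask = "scores_*.csv"
files = ["scores_2024_03.csv", "readme.txt", "scores_2024_04.csv"]

matched = [name for name in files if fnmatch.fnmatch(name, mask)]
print(matched)  # ['scores_2024_03.csv', 'scores_2024_04.csv']
```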
Learn more
Learn how to create your own source format to handle recurring imports not already covered by our Source Format Library.
Web Services
Pulling from a Remote Endpoint Into Slate's Upload Dataset
This option allows Slate to poll external web services for new data and then process that data through the Upload Dataset interface, just as if the files were transferred via SFTP. The responses could be XML but can also be delimited data.
Since the data updates are processed through our Upload Dataset mechanism, changes to records can be queued, batched, and run in the most efficient manner possible, which minimizes or eliminates any potential for observable record locking.
Pushing Data into Upload Dataset through a Web Service Endpoint
This option uses web services to post files into Slate, which are then processed by the Upload Dataset mechanism, just as if the files were transferred via SFTP. The result is much like pulling from a remote endpoint, except that the third party initiates the transfer.
Learn more
Explore related documentation on Web Services.
Document Imports
We recommend that files be sent using the industry-standard Document Import Processor (DIP) approach, where a zip archive is generated containing PDFs or TIFFs of the documents to be imported, along with an index file containing the filename of each document as well as any associated metadata parameters (such as EMPLID and document type). Slate can then extract the documents and index files to import the documents into the appropriate student records.
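The DIP layout described above can be sketched in a few lines: a zip archive containing the documents plus an index file listing each filename with its metadata. The sample documents, metadata values, and index filename below are invented for illustration; the EMPLID and document-type columns follow the article:

```python
import csv
import io
import zipfile

# Hypothetical documents to import (placeholder bytes stand in for real PDFs)
# and one index row per document: filename, EMPLID, document type.
documents = {
    "transcript_1001.pdf": b"%PDF-1.4 ...",
    "essay_1002.pdf": b"%PDF-1.4 ...",
}
index_rows = [
    ("transcript_1001.pdf", "1001", "Transcript"),
    ("essay_1002.pdf", "1002", "Essay"),
]

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
    for name, data in documents.items():
        archive.writestr(name, data)
    # Build the index file alongside the documents inside the same archive.
    index = io.StringIO()
    writer = csv.writer(index)
    writer.writerow(["filename", "EMPLID", "document_type"])
    writer.writerows(index_rows)
    archive.writestr("index.csv", index.getvalue())

with zipfile.ZipFile(io.BytesIO(buffer.getvalue())) as archive:
    print(sorted(archive.namelist()))
# ['essay_1002.pdf', 'index.csv', 'transcript_1001.pdf']
```

Packing everything into one archive is also what makes the SFTP delivery in the best-practice note below efficient: a single file transfer instead of thousands.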
Learn more
Explore related documentation on Document Imports.
Best Practice: We recommend delivering import documents in a zip file via SFTP, since SFTP is much more efficient at transmitting a single file (such as a zip archive) than thousands of individual files. While documents could be imported using web services, a zip archive containing numerous PDFs can be quite large, so we advise handling document imports over SFTP.
PDFs are preferred over TIFFs, since a digital PDF of non-scanned data is a fraction of the size of a TIFF file. A TIFF file is a rasterized/bitmapped image without digital text content, so it cannot be enlarged beyond its original resolution without a loss of fidelity.
Imports made via the Upload Dataset tool and material uploader have a 256 MB file size limit. In addition, there is a 15-minute processing time limit for documents to be uploaded.