Challenges in data sharing and transfer
Case 1: Evaluating MLOps platforms
Not so long ago, I met with over 20 MLOps companies and another 10 DataOps companies to understand their challenges in achieving customer success during the platform evaluation phase. In particular, I wanted to learn about their workflows at the very first step of the evaluation process: data collection and transfer. I had a hunch this part of the pipeline posed challenges, based on my own experience evaluating these platforms.
I'll go over how the various platform companies address the existing challenges, discuss some of the needs that remain unaddressed, and finally propose a new solution useful to both platform companies and their customers.
Avoid data transfer
To preempt the challenges that go along with customer data transfer, platform companies may choose to avoid transfer altogether.
Public or synthetic data
The easiest solution is to use public or synthetic data sets to obtain benchmarks in order to compare and evaluate platforms. Of course, achieving good performance on public data sets is not as convincing as achieving good performance on the customer’s own data.
Self-hosted solutions
Another common approach to enabling customers to evaluate a piece of software is to give them the option to run the code on their own cloud. This works particularly well for open source solutions, but also for proprietary solutions that can be packaged and installed.
The challenge with this approach is that you need to ensure your customers are savvy enough to use the platform as intended, so that the assessment of its capabilities and features is fair and correct.
This is difficult to achieve and one of the reasons why many software companies skip the self-hosted approach. By taking on the task of hosting, platform companies can more effectively manage versioning, bugs, and customer service.
Another drawback of a self-hosted solution has to do with compute resources. Platform companies usually tune services in order to manage costs and performance. Sometimes they move data to where GPU costs are affordable. This is not something that can be ensured in the self-hosted scenario.
The good news is that many platform companies are now offering cloud-native solutions spanning a variety of cloud providers. In this scenario, post-evaluation and post-contract, no transfer of data would be necessary.
But data transfer is inevitable
However, pre-contract and during the evaluation phase, customers are asked to transfer data to platform companies so that the full potential of the tools is put on display in a managed environment. This also ensures that the results closely mirror the outputs of a real-world setting. And so platform companies and their customers value and prefer data transfer earlier in the relationship.
Data transfer is not that simple
Let’s go over some of the limitations of data transfer tooling today.
SFTP is not quite right for the cloud
One of the most popular workflows for data transfer is for the customer to set up an SFTP server, download data from their cloud data warehouse into a CSV, and then upload the data to the SFTP server. The receiver then downloads the data from the server and uploads it to their cloud warehouse. This last step usually requires some non-trivial tool.
Another approach would be to replace the SFTP server with a cloud storage bucket. In either case, data is extracted as a CSV file where important type information is lost, temporarily stored locally, then uploaded to the cloud at the destination.
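The type-loss problem is easy to demonstrate. The sketch below (using pandas, with made-up column names) round-trips a small table through CSV, as in the SFTP workflow above: the date column comes back as plain strings, and the nullable integer column comes back as floats.

```python
import io

import pandas as pd

# A small table with a date column and a nullable integer column.
df = pd.DataFrame({
    "signup_date": pd.to_datetime(["2023-01-05", "2023-02-11"]),
    "purchases": pd.array([3, None], dtype="Int64"),
})

# Round-trip through CSV, as in the SFTP/bucket workflow described above.
buffer = io.StringIO()
df.to_csv(buffer, index=False)
buffer.seek(0)
restored = pd.read_csv(buffer)

# After the round trip, signup_date is a plain object (string) column
# and purchases has been coerced to float64 because of the missing value.
print(df.dtypes.to_dict())
print(restored.dtypes.to_dict())
```

The receiver can recover the types with explicit `parse_dates` and `dtype` arguments, but only if the sender communicates the schema out of band, which is exactly the coordination burden described above.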
This is a manual and complex process, particularly if data is to be kept in sync. With humans in the loop it is also highly error-prone, and monitoring is non-trivial once data leaves the confines of a single company.
Note that there are a variety of custom tools for cloud-to-cloud transfer. However, the market is fragmented: each solution tends to be specific to a particular cloud provider. Managing egress costs is a headache in and of itself. This custom and ad hoc work is not something either side of the transaction is really motivated to take on.
Complex pipelines leave the scope of the data team
As the data transfer pipelines become complex, they leave the scope of the Data team and the task falls on the Engineering or DevOps teams to facilitate the transfer. This is not ideal as platform companies want to interact directly with the teams that will ultimately become the users of the platform. As they iterate on the required data to be transferred, it would be ideal to work directly with the team that has the domain knowledge and use case. A good tool empowers data teams to independently oversee these transactions.
Managing compliance is difficult
Platform customers are at all times required to stay compliant with HIPAA, GDPR, CCPA, and other legislation depending on the type of data they store.
Without the right tools, managing security and staying compliant is not easy and often close to impossible. Local machines are not the best place to store data, even temporarily, before uploading to an SFTP server.
Testing is difficult
Sometimes customers inadvertently send data containing PII (personally identifiable information) without the right contracts in place. In this scenario, platform teams would become responsible for scrubbing the data.
The status quo in data sharing and transfer makes it difficult for either side to implement tests that validate the data in flight, e.g., "Don't send PII," "Adhere to the following schema," "Ensure data completeness." Further, there is no reliable way to communicate requirements. As an example, platform companies might require that for a forecasting task, data contains entries for all of the days of the week. For a fraud detection task, they might require that all the main categories of fraudulent activity are included.
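To make the idea of in-flight checks concrete, here is a minimal sketch of two such tests: a PII check based on a hypothetical column-name denylist, and a completeness check for the forecasting requirement above (all days of the week present). The denylist, column names, and sample data are all illustrative assumptions.

```python
import pandas as pd

# Hypothetical denylist; a real system would use richer PII detection.
PII_COLUMNS = {"email", "ssn", "phone", "full_name"}

def check_no_pii(df: pd.DataFrame) -> list:
    """Flag columns whose names suggest PII."""
    return [c for c in df.columns if c.lower() in PII_COLUMNS]

def check_all_weekdays(df: pd.DataFrame, date_col: str) -> set:
    """For a forecasting task: report weekdays missing from the data."""
    present = set(pd.to_datetime(df[date_col]).dt.day_name())
    all_days = {"Monday", "Tuesday", "Wednesday", "Thursday",
                "Friday", "Saturday", "Sunday"}
    return all_days - present

# Illustrative data set: five consecutive days starting on a Monday.
df = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=5, freq="D"),
    "email": ["a@x.com", "b@x.com", "c@x.com", "d@x.com", "e@x.com"],
    "sales": [10, 12, 9, 14, 11],
})

print(check_no_pii(df))                # flags the 'email' column
print(check_all_weekdays(df, "date"))  # weekend days are missing
```

The customer would run checks like `check_no_pii` before sending; the platform company would run checks like `check_all_weekdays` on receipt, ideally with both sets of tests declared in one shared place.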
Versioning is difficult
When customers interact with multiple platform companies, each one might get a different version of the data. It’s hard to keep track of the source of truth, affecting the fairness of the evaluation. Which company has which data?
This lack of versioning also makes it difficult to manage audits and compliance.
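One lightweight way to answer "which company has which data?" is to fingerprint each data set with a content hash and record every transfer in a ledger. The sketch below is a toy illustration of that idea; the `record_transfer` helper and ledger structure are assumptions, not an existing tool.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content hash that identifies a specific version of a data set."""
    return hashlib.sha256(data).hexdigest()[:12]

# Hypothetical ledger: which partner received which version, and when
# (timestamps omitted for brevity).
ledger = []

def record_transfer(partner: str, data: bytes) -> None:
    ledger.append({"partner": partner, "version": fingerprint(data)})

v1 = b"id,amount\n1,10\n2,20\n"
v2 = b"id,amount\n1,10\n2,20\n3,30\n"  # drifted copy with an extra row

record_transfer("platform-a", v1)
record_transfer("platform-b", v2)

# The ledger reveals that the two platforms evaluated different data.
versions = {entry["version"] for entry in ledger}
print(versions)
```

Because the fingerprint is derived from the bytes themselves, the same ledger doubles as an audit trail: any partner can recompute the hash of what they received and verify it against the record.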
Properties of a new solution
Is there a better way to go about data exchange with business partners? Below I explore some of the properties of a new solution.
Cloud agnostic
A cloud-agnostic solution that works for any two peers means platforms and their customers focus on the task at hand.
Auditability and consistency
By making it simple to share a single version of the data set with all platforms, a fair evaluation is ensured. Further, maintaining a ledger of all transactions takes the challenge out of audits.
Security and compliance
Staying secure and compliant is easy when you have the right tools. Handling permissions and processes via toggles that enable masking and encryption allow companies to securely share data more quickly.
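A "masking toggle" could be as simple as deterministic pseudonymization of flagged columns before transfer. The sketch below hashes a sensitive column with a salt; the salt value, column names, and data are illustrative, and a production system would manage secrets and key rotation properly.

```python
import hashlib

import pandas as pd

# Hypothetical salt; real systems would store this in a secrets manager.
SALT = "example-salt"

def pseudonymize(value: str) -> str:
    """Deterministically replace a sensitive value with a salted hash."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:10]

df = pd.DataFrame({
    "email": ["a@x.com", "b@x.com"],  # flagged as sensitive
    "amount": [10, 20],               # safe to share as-is
})

# A per-column "toggle": mask only the fields flagged as sensitive.
masked = df.assign(email=df["email"].map(pseudonymize))
print(masked)
```

Because the mapping is deterministic, the masked column still joins and groups consistently across tables, which keeps the data useful for evaluation even though the raw identifiers never leave the customer.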
Data validation and testing
Both sides of the transfer can run tests to validate the data in flight. Customers need to ensure that they are not mistakenly transferring private fields. Invoking automated tests that align with the various types of legislation is invaluable. Further, platform companies may have certain requirements about the data being shared. They too can create tests to ensure their conditions on customer data are met.
Unlike traditional software applications, evaluating and testing ML systems relies on quality data and lots of it. In every evaluation, much time is spent in bringing data to the platform. We aim to evolve from manual, script-driven transfers to a solution that is low-code, faster, and safer.
And it’s not just ML companies facing such challenges. Data transfer with business partners and across organizations is an underserved problem and impacts every single industry. In a future post, we discuss in detail the more general problem of cross-organization data transfer and sharing.