ALTR Blog

The latest trends and best practices related to data governance, protection, and privacy.

Way out over our skis

As a society we’ve been barreling down the path to being completely “data-driven” for the last 10 to 20 years. We’ve produced and collected massive amounts of data and rapidly built up an ecosystem of technological conveniences on that data foundation, without thinking about the new risks that brings. But looming over the horizon are data privacy and security challenges so large that if we don’t solve them, we could actually regress from a technological perspective. All the conveniences we take for granted require that those hoards of data stay safe and secure. If we don’t halt those hazards in their tracks, they’ll only get worse, until the only choice is to lock down data, putting all the progress we’ve made at risk. In other words, we’re way out over our skis right now.

At ALTR, we’re committed to doing whatever it takes to overcome this challenge. Our mission is to solve for the root problems of data privacy and security and give everyone the tools to crush them.

Uniquely positioned to democratize data governance

We don’t believe high costs, long implementations, and big resource requirements should stop companies from controlling and protecting their data. And unlike many other companies in the data control and protection space, we’re not a legacy technology built for the datacenter and awkwardly, bulkily, expensively transitioned to the cloud. From our beginning, we saw the potential and power of building on the cloud, and as a cloud-native SaaS provider, we don’t have the overhead expenses others do. This means we can utilize advanced cloud technologies to solve these difficult problems and provide solutions to customers on cloud data platforms easily, quickly and at a very low cost.

And that’s what we’re doing. Today, we’re announcing three new plans that deliver simple, complete control over data. This includes our Free plan—the first and only in the market—which gives companies powerful data control and protection for free, for life, starting on Snowflake. Just like Snowflake is democratizing cloud data access, ALTR is democratizing cloud data governance. We’re freeing data governance so that everyone can control and protect their data.

Powerful cloud-based data control and protection that matures with your Snowflake journey

One of the reasons companies struggle to tackle the data privacy and security challenge is that they put off understanding who is consuming what data, and why, until later. With other solutions, doing this early is too expensive or too resource-intensive, so instead of adopting data governance as a preventative measure, companies wait to understand and manage data usage until regulations require it or, worse, until they have a privacy or security incident.

ALTR’s Free plan clears away those objections, supporting companies’ initial forays into Snowflake by enabling easy yet powerful data control and protection from the start, with no cost and no commitment. Our new data usage analytics capability shows who your top Snowflake users are and gives clear visibility into what and how much data they’re consuming. This allows you to better understand what normal is, create controls based on necessary usage, and quickly identify and investigate anomalies. At a high level, this intelligence will help you assess the value of your Snowflake project, plan your future roadmap, and put you in a better position to solve problems that might arise later.

The ALTR Free plan is available here on our website, and now via Snowflake Partner Connect, where it is the first complete data control and protection solution.

As you mature in your cloud data platform journey, better understand your use cases, and start thinking about adding sensitive data, you may want to upgrade to ALTR Enterprise for expanded access controls, compliance support, and enterprise integrations. Once you’re more advanced and have migrated all your sensitive data, manually implementing governance features can make the cost of ownership and maintenance very high. ALTR Enterprise Plus can help you automate to save time and scale more easily, utilize powerful data security features like tokenization, and get the support of our experienced customer success team.

Ending the data privacy and security problem

Data governance tools that cost too much and require too much time and too many resources to implement and maintain are crippling our ability to take on the big data privacy and security problems we must solve in order to continue our pace of technological advancement. Leaving data at risk is just not sustainable for anyone. ALTR's release today is the next step in our journey to solve these problems and deliver formidable solutions that are easy to use and easy to buy – so everyone can control and protect their data, wherever they are on their data journey.

Get started for free today.

I think it’s safe to say that everyone in the technology industry was shocked by last week’s Amazon Web Services outage. As one of the major backbones of cloud-based Internet services, Amazon’s issues affected everything from Disney+, Netflix and Roku streaming to services such as Venmo and CashApp to the company’s own delivery drivers. Although this was unprecedented for Amazon, it was definitely a wakeup call to anyone who relies on cloud hosting services. And it underscored the need for true SaaS-based, multi-region “high availability” solutions to support resiliency in modern data architectures.  

Between a blip and a disaster

According to Amazon, the issue originated in its US-East-1 region in Virginia. Of course, Amazon’s services include high availability within its East, West and other global regions, so if there’s an issue in one datacenter or zone within a single region, workloads can be routed to different locations within the same region to ensure uptime. This incident was so shocking because the number of core services affected in a single region increased the blast radius so significantly.  

Yet, it might not qualify as a true “disaster” as defined in most disaster recovery planning. Those are natural events (hurricanes, tornadoes, tsunamis) or man-made ones (accidents, sabotage, terrorism, hacking) that cause a long-term disruption to the business. In this case, there was no natural event or obvious intentional sabotage that would clearly impact availability longer term. While companies could have implemented their disaster recovery plans, doing so would have come with costs: potential loss of a limited amount of data, absorption of employee resources to implement the plan, and the time necessary to reverse the changes once the incident had passed. There was no way to know how temporary or long-lasting this incident might be, so executing a disaster recovery plan could have placed an additional and unnecessary burden on the business.

That still leaves a gap between normal operations and full-on disaster recovery – a gap that can cause major issues. Many software companies today pride themselves on the “five nines” – 99.999% uptime in a given year. Some even include that SLA in their contracts. Even though this incident lasted less than a day, it was potentially enough to drop affected companies’ uptime for 2021 from five nines to three: 99.9%. They lost two orders of magnitude in one incident.  
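To put those nines in perspective, here is a quick back-of-the-envelope calculation; the eight-hour outage length is an assumption for illustration, since the incident lasted less than a day:

```python
# Downtime budget each "nines" SLA allows per year, and what one long
# outage does to the annual figure.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for label, uptime in [("three nines", 0.999),
                      ("four nines", 0.9999),
                      ("five nines", 0.99999)]:
    budget = MINUTES_PER_YEAR * (1 - uptime)
    print(f"{label}: {budget:,.1f} minutes of downtime allowed per year")

# Assume (hypothetically) a single 8-hour outage and perfect uptime otherwise.
outage_minutes = 8 * 60
best_possible_uptime = 1 - outage_minutes / MINUTES_PER_YEAR
print(f"One 8-hour outage caps the year at {best_possible_uptime:.4%} uptime")
```

Five nines allows only about 5.3 minutes of downtime per year, so a single multi-hour outage blows through that budget by two orders of magnitude.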

In the critical path of data

For modern enterprises that rely on data to run their internal operations, uptime and availability are just as important, and in the modern data architecture the entire system is only as reliable as its weakest link. This is especially critical right now for industries like banking, where the movement of money, and in fact the entire system, relies on the flow of data. But this will become increasingly important across all industries as data becomes more and more essential to core business functions.

That means when you’re picking your database, your ETL provider, and your BI tools, availability needs to be a key “non-functional” factor you evaluate as part of your buying process along with any features you need. When it comes to data control and security solutions, ALTR’s availability is unmatched. Because we interact with and follow data from on-prem through the ETL process to the cloud, we’re in the critical path of data, making it crucial that our service keep up with the rest of the data ecosystem. Our answer is multi-region, high availability built into normal operations via a true multi-tenant SaaS solution. Because SaaS allows economies of scale, we don’t have to build and maintain dedicated single-customer infrastructure in multiple geographic regions. We just have to construct ALTR infrastructure in multiple geographies for all our customers to leverage, significantly reducing cost and complexity.  

Disruption doesn’t have to slow you down

This incident highlighted a weakness in resiliency planning: what happens when something less than a disaster causes a significant disruption to the business? We don’t think the uptime guarantees many have trusted to date are strong enough for today’s business environment, especially for business-critical data infrastructure. The future is high availability based on multi-region redundancy – we expect this to be a broad and consistent theme across industries. And for essential data control and security, that means ALTR.

Cloud computing has disrupted just about every single industry since its inception in the mid-2000s, and financial services are no exception. According to a 2020 Accenture report, banking is just behind industries like ecommerce/retail, high tech electronics, and pharma/life sciences on the cloud adoption maturity curve.  

In fact, the average bank has 58 percent of its workloads in the cloud, but the majority run on private rather than public cloud, which limits the cost savings and benefits. The emphasis on private cloud may be due to the unique challenges the banking industry faces, including meeting stringent regulatory requirements on infrastructure they don’t own. And for banks that have made no move to the cloud, investments in legacy systems and the difficulty of transitioning to the cloud may be an additional roadblock.

For smaller and midsize banks in the sub-$10B asset class, the decision to move to the cloud can be even trickier. They may feel pressure from their boards to adopt this powerful new technology while also being strongly reminded to do so safely! Regional banks often build their reputations on the trust of local communities, which makes any change that could damage that trust an enormous risk. At the same time, they may be competing with larger players that have adopted the cloud at a rapid pace to provide cutting-edge services to consumers. Capital One became the first major bank to go cloud-only when it closed its own datacenters entirely and moved all operations to AWS public cloud. Accenture found that moving swiftly to the cloud is paying off for banking “cloud leaders”—they’re growing revenue twice as fast as the “laggards”.

So, it’s clear that the move to the cloud is coming for even smaller banks, but where to start? We suggest taking a look at your enterprise data warehouse.  

Take the first step into the cloud with your enterprise data warehouse

For banks that need to minimize risk, are unsure how to make the move safely and properly, may not have a CISO or even a dedicated security team, and whose IT teams are focused on managing their own iron in their own datacenters, taking the first step to the cloud can seem like a heavy lift. It doesn’t have to be.

You can start small with your enterprise data warehouse. This is often a SQL server in your own datacenter where you collect a daily or weekly data dump from your core systems. It may contain some sensitive data that’s accessed by various groups around the company via business intelligence tools like PowerBI, Qlik or Tableau installed on user desktops. Marketing, for example, might run zip code reports on deposits in order to make targeted offers on mortgages or car loans.

The key is that the data is already consolidated and it’s not core software – that reduces both the complexity and the risk. Moving this workload to Snowflake with ALTR data governance and security is fairly straightforward and provides several interesting advantages for small and mid-size financial institutions:  

  • Reduced costs: Eliminate the expense of maintaining the datacenter server infrastructure required for an on-prem data warehouse. With Snowflake, you pay for storage separately from compute, minimizing costs.  
  • Increased reliability and scalability with enterprise-level security: Snowflake’s unique architecture allows virtually all your users and data workloads to access a single copy of your data without impacting performance. In addition, Snowflake and ALTR offer SOC 2 Type 2, PCI DSS compliance, and support for HIPAA compliance.  
  • Expanded BI access: You can standardize on a SaaS business intelligence tool you don’t have to install or maintain. This allows you to provide easy access to more users across the company to make the best use of your data.
  • Enriched data with third party datasets: Snowflake Data Marketplace supplies more than 500 live and ready-to-query data sets from more than 140 third-party data and data service providers to enhance your own datasets and provide deeper, actionable insights for your organization.  
  • Integrated cloud-native data governance and security: With ALTR, you can discover and classify sensitive data as it’s moved to the cloud, then instantly and automatically restrict access based on those classifications. You also get complete observability and protection over how data is consumed, regardless of access point. (A simplified sketch of the discover-and-classify idea follows this list.)
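As a rough illustration of what “discover and classify” means in practice, here is a minimal sketch using regex pattern matching; the patterns and helper are hypothetical, and real classifiers are far more sophisticated than this:

```python
# Simplified sensitive-data discovery: scan sampled column values against
# known PII patterns, then tag the column for policy enforcement.
import re

PATTERNS = {
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_column(sample_values):
    """Return the set of sensitive-data tags found in a sample of values."""
    tags = set()
    for value in sample_values:
        for tag, pattern in PATTERNS.items():
            if pattern.search(str(value)):
                tags.add(tag)
    return tags

# A column tagged EMAIL here would then get an access policy applied to it.
print(classify_column(["jane@example.com", "512-555-0199"]))  # {'EMAIL'}
```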

Snowflake + ALTR = your secure cloud data warehouse-in-a-box

Migrating your enterprise data warehouse workload to Snowflake with ALTR essentially gives you a “secure cloud data warehouse-in-a-box”. You can take on this discrete pilot project, with minimal investment of time, resources, cost and risk, and learn how the cloud works best for your financial institution. This allows you to start operationalizing your process for moving additional workloads to the cloud safely.

In addition to the experience gained, there can be measurable results. Accenture estimates that when financial institutions move data warehousing and reporting workloads to the cloud they could see a 20-60% reduction in costs, increased operational efficiencies, improved real-time data availability, and better data governance and lineage.  

Not bad for your first adventure into the cloud.

Ready to start your secure cloud data warehouse journey? Let us show you how easy it is to get started. Get a demo today.

Everybody is doing it: the cloud data migration. Whether you call your project “data re-platform”, “data modernization”, “cloud data warehouse adoption,” "moving data to the cloud” or any of the other hot buzz phrases, the idea is the same: move data from multiple on-prem and SaaS-based systems and data storage into a centralized cloud data warehouse where you can use that data to spend less money or make more money. In other words, the goal of consolidating into a cloud data warehouse is, at its core, to save company money on cost of goods sold or grow company revenue streams.

What’s not to love? But one of the tradeoffs is that, as you go through the process, you end up losing the full visibility and control you had over data in your on-prem systems, leading to cloud migration risks and cloud data migration challenges you might not expect.

Cloud data migration responsibilities

We see this concern about visibility and control come up over and over, at every stage of the cloud data migration journey – from CIOs, CISOs and CDOs who are accountable for making sure data stays secure, to the leaders who must address this in the overall project within the given budget, until it finally lands on the data engineers and DBAs who must decide how they’ll fix this and pick the actual solution. Here's how to mitigate cloud migration risks...

Overcome Cloud Data Migration Challenges

Putting your cloud data migration project on rails out of the gate means you can move more quickly and more securely. Even if you’re going 100 MPH with data, you can be sure you’re not going to:

  • Fly off the path
  • Put your data or company at risk
  • Damage your reputation  

How do you do that? Whether you’re using an ETL tool or Snowpipe to transfer data, and whether you choose a cloud-hosted BI (business intelligence) tool or an on-prem solution to run analytics, having visibility, control, and protection in place from the very moment you begin means your continuous data path from on-prem to the cloud will be safe, speedy and secure.

And those pesky cloud data migration challenges that could pop up at each stage won’t slow you down or derail the project:  

Selecting a Cloud Data Warehouse:  

Whether it’s Snowflake, Amazon Redshift, or Google BigQuery, you’ll need to answer these questions to get your cloud data migration project off the ground…

  • How do I control user access to the data?
  • What level of audit logging do I have? And what do I need?
  • How can I integrate this with my existing security stack (SSO, MFA, Splunk)?
  • If data cannot be in clear text, is encryption okay or will I need a tokenization solution?

Selecting a BI tool:

Again, whether you’re looking at Tableau, Qlik, PowerBI or other, similar operational questions will come up…  

  • Can I give different users using a shared service account varying levels of access?  
  • If I wanted to do tokenization, where in the stack is the best place to implement it? Should it be in the BI tool or elsewhere?  
  • How can I see what each user is doing through a single service account?  
  • How do I know what sensitive data is being accessed in the reports? How can I control that data?  

Choosing an ETL:

When moving the actual data from data sources to the new cloud data warehouse, if these questions don’t come up, they should…  

Implementing a Data Catalog:

Data “brains”, like Collibra, OneTrust, and Alation, have all the information about users, data itself, data classifications and the policies that should be placed on user data access. What they don’t have is a way to operationalize that policy.  

If you’re prepared to answer these questions from the start, you can avoid cloud migration risks and overcome cloud data migration challenges, and the good news will travel back up the line very quickly to the executives who need to ensure that data visibility, control and protection are covered.

ALTR is here to help you solve the problem at a very low cost, with a low-friction implementation, and a short time to value. You can start with ALTR’s free plan today and upgrade when you have too many users and too much data to govern at scale without an automated data control and policy enforcement solution like ALTR.  

Put your cloud data migration on rails with ALTR!

Since Salesforce launched at the end of last century, the cloud application boom has been unstoppable. Along with that has come another boom: cloud-hosted data. The rise of digital transformation, as well as other trends like mobile and IoT, has led to a massive increase in the amount of data created. In fact, 64.2 zettabytes of data was created or replicated in 2020, according to IDC. A zettabyte is 10^21 bytes!

Now, all that data represents a rich resource of knowledge to business – from where consumers visit online to how companies make purchases. And the best way to get value from it is to consolidate the multitude of data points and put machine learning, AI or Big Data tools on top of it to connect the dots. This data analysis can either be done in an on-premises data warehouse or in the cloud. Doing it in the cloud delivers some compelling benefits including virtually unlimited scalability with no costs for infrastructure investment and lower ongoing maintenance. The attractiveness of the cloud data warehouse model is one of the reasons Snowflake debuted with the biggest software IPO ever in 2020.  

But consolidating all this data, especially sensitive data, into the cloud creates a serious challenge for Chief Information Security Officers (CISOs): how can they be 100% responsible for data security when they have 0% control over the infrastructure where it’s stored?  

The cloud data accountability/control mismatch

CISOs and their security teams had their roles nailed down: secure the datacenters with firewalls, stop employees from clicking on phishing emails or accessing malware-infected websites, and protect the company perimeter from hackers and outside threats. These were tactics meant to deliver specific and important end results: keep the network safe and protect company data. But this perimeter defense mechanism does not apply to the “perimeter-less” cloud, which is part of what drove Forrester Research to champion “Zero Trust” as an alternative model.

But today, a Chief Marketing Officer (CMO) may look at the rich data streams moving throughout the company, generated by 15 or 20 different applications, with hundreds of data points about customers and prospects, and make the argument that if only that data were combined, it could deliver a minutely detailed composite of individual users and buyers – and marketing could raise revenue by 8%.

The CMO gets the go-ahead to move that data to Snowflake, but where does that leave the CISO? Suddenly, the data is in an environment he or she doesn’t control. Increasingly, the business project takes a much higher priority, and security is left trying to catch up. The CISO is still responsible for securing data that’s been moved outside the nice, cozy, protected perimeter the security team has spent years perfecting. If there’s a data breach, they’re still on the hook and could still get fired. But how can they stop a breach in a space they don’t control?

The CISO is still responsible, even when data leaves home

Think of it like a parent who lets their children stay overnight at a friend’s house. The parent is still responsible for the child’s safety, so shouldn’t they ask the friend’s parents some questions? Find out about the culture of the home? Who the parents’ friends are? What kind of rules they impose? The parent doesn’t stop being responsible or stop worrying once their child leaves the home. And they certainly don’t lock their children up at home in order to “keep them safe” – that’s not reasonable.  

Some CISOs and Chief Risk Officers try to maintain control by placing stringent rules around how the data can be stored and used in cloud data warehouses. I’m aware of one that requires sensitive data to be stored on Snowflake only when encrypted or tokenized. In order to be used or operated on, it has to be moved into a secure on-prem environment the CISO controls, decrypted/detokenized, utilized, then encrypted or tokenized again before being transferred back to Snowflake.
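To make the clunkiness of that round trip concrete, here is a minimal sketch assuming a simple vaulted tokenization scheme; the class and field names are hypothetical, not any vendor's API:

```python
# Round trip: tokenize before loading to the cloud, detokenize on-prem to
# use the real value, re-tokenize before sending it back.
import secrets

class TokenVault:
    """On-prem vault mapping random tokens back to the real values."""
    def __init__(self):
        self._vault = {}  # token -> original value

    def tokenize(self, value):
        token = "tok_" + secrets.token_hex(8)  # no mathematical link to value
        self._vault[token] = value
        return token

    def detokenize(self, token):
        return self._vault[token]

vault = TokenVault()

# 1. Sensitive values are tokenized before landing in the cloud warehouse.
cloud_row = {"customer_ssn": vault.tokenize("123-45-6789"), "balance": 1200}

# 2. To operate on the real value, the data must come back to the
#    controlled on-prem environment and be detokenized...
ssn = vault.detokenize(cloud_row["customer_ssn"])

# 3. ...then re-tokenized before being transferred back to Snowflake.
cloud_row["customer_ssn"] = vault.tokenize(ssn)
```

Every use of the data pays that full round trip, which is exactly why the approach slows things down.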

It may be secure, but it’s like making your child come home to ask permission before playing a game or having a snack at the sleepover. It’s really clunky and slows things down. Some security execs are jumping through a lot of hoops to overcome this accountability and control mismatch.  

Others are just abdicating control and trusting cloud data warehouse providers. This leaves a hole in security: these providers have taken over responsibility for maintaining the infrastructure, the perimeter, the physical space, but they’re not taking on the responsibility of user identity and access or the data itself – that still resides with the company, especially the CISO. To be clear, Snowflake is very secure, but the more successful they become, the more of a target they are for bad actors and especially nation-states.

Moving beyond perimeter-centric to data-centric security

This shift to the cloud really requires a shift in the security mindset: from perimeter-centric to data-centric security. It means CISOs and security teams need to stop thinking about hardware, datacenters, perimeters and start focusing on the end goal: protecting the data itself. They need to embrace data governance and security policies around data. They need to understand who should have access to the data, understand how data is used, and place controls and protections around data access. They should look for a combined data governance and security solution that delivers complete data control and protection.  

Because bad actors don’t care who’s responsible—they’re going where the data is and taking advantage of any holes they find. The 2021 Verizon Data Breach Investigations Report (DBIR) showed this clearly: this year 73 percent of the cybersecurity incidents involved external cloud assets. This is a complete flip-flop from 2019, when cloud assets were only involved in 27 percent of breaches.

Regulators also don’t care where data is when it comes to responsibility for keeping it safe: it’s on the company who collects it. Larger companies in more regulated industries face very large, really punitive fines if there’s a data leak—which can lead to severe consequences for the business…and the CISOs responsible.  

If CISOs want to not only catch up to but get ahead of business priorities, bad actors, and regulatory requirements, they need to focus on controlling, protecting and minimizing risk to data—wherever it is.

ALTR’s origin story, like that of a lot of companies, starts with pain. In the early 2010s a group of engineers working at a technology company in the options trading space found themselves contending with problems that had no solutions.

They held data that could very easily and quickly be used for personal financial gain by a thief, and they had no tamperproof records of who was accessing it. They had no way to control that access in real time. And worst of all, they had no feasible way to protect data from those who would log straight into their required co-located servers to steal it – while still keeping the data functional for the business.

The reason why these solutions didn’t exist was not because no one had thought of them. At the time, database access monitoring appliances and encryption devices with sophisticated key management were available.

The problem was that the world had changed.

Like Jeff Bridges’ character Flynn in the classic movie TRON, who has his body digitized and finds himself inside of a computer – the world of connected computing had started to detach from its physical roots. Virtualization meant that a computing workload might exist on any device, and the onset of cloud computing and massive proliferation of mobile devices meant that it might not be on a device that was identifiable.

A security model built on a solid physical network topology with endpoints, routers, switches, and servers, was melting away – and with it the first generation of security solutions that were built for that universe.

Taking its place, a logical model. Identities that could be in a café in Turkey or at home in Austin, or hurtling across the sky at hundreds of miles per hour (as the humble author of this post is, right now), accessing workloads and data that also might be anywhere – inside of an “availability zone” instead of on a known server.

This new universe has come very quickly and brought new threats with it, and created a crack between itself and the old one that is literally leaking data. This is because many are trying to fight the new threats with the old tools, looking for the power plug to pull from the wall or the hard drive to wipe when there just isn’t one.

Jeff Bridges didn’t beat the evil Master Control Program by rejecting how his world had suddenly and completely changed. He went with the flow, man, and beat it at its own game.

ALTR’s products are designed for this new world, but this blog isn’t about them. We have a whole website dedicated to that. It’s about exploring the corners of this new universe, highlighting the best ideas from those around the computing and security worlds, and adding our own voice to the conversation. We hope you’ll follow along.

Here in Colorado, it’s just about winter sports season. And that means I’m thinking about making the drive up to Summit County to take advantage of some of the best skiing anywhere. The destination is completely worth it, but the road is not without its potential risks: slippery inclines, dramatic switchbacks, snow drifts and 18-wheelers barreling through the weather toward the West Coast.

Businesses today face a similar situation: the ability to use data to gather insights across the enterprise is an exhilarating goal, even though getting there can require overcoming some hazards. Many enterprises are moving company information into cloud data platforms like Snowflake in big data analytics projects that take advantage of scalable storage, accessible compute power, and integrations with cloud-based BI tools for a sophisticated view of every part of the business. To get the full picture though, sensitive data must often be included. Whether that’s personal customer data or highly restricted business information, uploading and utilizing that sensitive data creates a risk due to privacy regulations and confidentiality concerns.  

But just like the drive up the mountain, there are technologies that can make it easier and safer for companies to make the journey. Here are three examples where ALTR’s technology can help companies reach their big data analytics goals:  

Big Data Analytics Insight #1: Determine the real cost to serve customers

The CFO of a logistics company wants to determine the actual costs to serve their customers in order to better align pricing and improve margins. They pull operational data, inventory management, warehousing, and fuel and vehicle maintenance costs into Snowflake. But this doesn't provide a full picture without including the costs of the people doing the work. The last key piece of data is compensation information for each employee involved in delivering the products. However, unlike the other information, this is highly sensitive information about what each individual employee gets paid along with their banking info. Putting it into Snowflake means that it could be accessible to some employees outside the finance and HR teams, like the Snowflake admin, for example.


ALTR automates the handling of this data and makes it easy, and because ALTR sits outside Snowflake, we’re able to create a secure mechanism that delivers an alert every time that sensitive data is accessed – by anyone. The logistics company is able to utilize all the required information – even private payroll data – to accomplish their big data analytics goal.

Big Data Analytics Insight #2: Better model customer buying behavior to boost sales

The CMO of a consumer goods company wants to get a holistic view of its customers and their buying behavior, but data is spread across multiple on-premises and cloud-based systems: Salesforce, Marketo, eCommerce sites, backend ERP systems, and customer behavior analytics tools. In order to tie demographic information about specific buyers to their online activity and buying activity, the data all needs to be in one place with at least one common value, usually a piece of PII (name, email, SS#). With this, marketing teams can look for buying indicators in localized regions: perhaps a mom looked at a specific blog post before purchasing diapers in Austin, TX. Maybe that’s a pattern: several moms looked at that post before buying diapers in Austin. Then marketing can use that insight to promote that blog to other moms in Austin, to drive similar purchases.
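As a toy illustration of why that shared value matters, here is a sketch of the join, assuming pandas and made-up table and column names:

```python
# Joining web analytics to purchase history on a shared PII key (email).
import pandas as pd

web_activity = pd.DataFrame({
    "email": ["mom1@example.com", "mom2@example.com"],
    "blog_post_viewed": ["choosing-diapers", "choosing-diapers"],
    "city": ["Austin", "Austin"],
})
purchases = pd.DataFrame({
    "email": ["mom1@example.com", "mom2@example.com"],
    "product": ["diapers", "diapers"],
})

# The join is only possible because both systems share the email field --
# the very PII that needs governance once everything lands in one place.
combined = web_activity.merge(purchases, on="email")
print(combined[combined["city"] == "Austin"])
```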


ALTR allows companies to protect sensitive PII data easily in Snowflake. You can find and classify personal information, see how it’s being used, then set policies to control access and limit consumption in the event of a policy infringement. Marketing teams can safely (and in compliance with privacy regulations) use sensitive data to do multivariate analysis, create an accurate model of customer behavior, and uncover opportunities to grow sales.  

Big Data Analytics Insight #3: Discover ways to optimize specific sales territories or business units

As the value of data analytics has grown, the number of people across the business who have or want access has grown in tandem. It’s no longer a handful of data engineers or analysts who can peer deep into every corner of the business but everyone from marketing to finance to HR to engineering to sales who wants to access operational, sales, marketing or finance data to make better decisions. A sales manager may want to get a better view into her territory – looking at past sales and annual trends or hot industries to find opportunities for the next quarter. All of the data to drive these insights will have to be consolidated into a single repository like Snowflake for cross-reference, along with the same data for every other sales territory. In order to make her territory data available to that sales manager, the other territories’ data must be made unavailable so that it stays confidential.


ALTR enables companies to easily set access policies based on role for any data deemed sensitive or confidential. That means your database admin can ensure each sales manager – or any other role in the company – only has visibility into the information they need to do their jobs better. The ability to easily control access means data can be made more freely available.    
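Conceptually, role-scoped row filtering looks something like the minimal sketch below; the roles, policy format and data are hypothetical, not ALTR's actual policy engine:

```python
# Role-based row filtering: each sales manager sees only their own territory.
SALES_DATA = [
    {"territory": "Southwest", "account": "Acme", "annual_sales": 1_200_000},
    {"territory": "Northeast", "account": "Globex", "annual_sales": 950_000},
]

# Each role maps to the column values its holder is allowed to see.
ROLE_POLICIES = {
    "sales_manager_sw": {"territory": "Southwest"},
    "sales_manager_ne": {"territory": "Northeast"},
}

def rows_visible_to(role):
    """Apply the role's row-level policy before any data is returned."""
    policy = ROLE_POLICIES.get(role, {})
    return [row for row in SALES_DATA
            if all(row.get(col) == allowed for col, allowed in policy.items())]

print(rows_visible_to("sales_manager_sw"))  # only the Southwest row
```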

This Big Data Analytics Journey is All About the Destination

Think of ALTR as a set of snow tires you put on at the beginning of the season, as you head into sensitive data territory. They’re easy to install and equipped to help you make the journey up the mountain whenever you’re ready to go. And the best part is that once you have them, you can stop thinking about the risks and concentrate on the amazing view.


Growing up in Austin, I’ve been steeped in technology from the start – we were basically talking about either tech, music, or BBQ. And one of the things I heard was that “every company is a software company.” It apparently originated in Watts S. Humphrey’s book, Winning with Software: An Executive Strategy, but was repeated in 2015 by Microsoft CEO Satya Nadella. The idea was really simple: use software to spend less money and/or make more money. Companies wanted to find the most cost-effective source materials, track orders more efficiently, or use the web to reach more customers to increase sales.

Today we’re hearing a new mantra: every company is supposed to be a “data-driven company”. But this is a more difficult transition that requires we overcome new regulatory roadblocks. Luckily, we have a winning strategy from the past to guide us.

The low-code/no-code path to becoming a “software company”

There weren’t really any external barriers to becoming “software companies.” Companies just had to hire people who could create software, and they were off and running. The only real limit was knowledge about what software could do and the number of people available who could write the code. Some industries including financial, healthcare and tech were well ahead of this technology curve. But other industries like consumer goods or manufacturing had to catch up. This created a “skills gap” that initially forced companies to hire expensive consultants to come in, evaluate the business, tell them where technology could improve service or increase sales, scope the technology needed, and then deliver it. This might help a company make more money in the long run or increase efficiencies in certain areas, but it certainly wasn’t quick or cheap.

Companies eventually overcame this hurdle with the advent of low-code or no-code tools — basically software that could do the code writing for users. WYSIWYG (what you see is what you get) tools like WIX or Squarespace for websites are one example. Before these self-service tools, company websites were owned and managed by the IT group because they required server set up and Java and HTML code to create and manage. Once these easy-to-use tools rolled out, employees across the enterprise gained the power that only coders and developers had before. The people who own making that part of the business more modern could do it themselves with these tools, creating an inflection point in technology adoption. The marketing team could now manage their own website, and a typo on the home page no longer had to be a major crisis.


Privacy regulations create a headwind to becoming data-driven

The next generation of this idea is that “every company is a data-driven company” — it’s the latest strategy for companies to make more money or save more money. It makes sense — after companies developed and rolled out all that software, they started ingesting and generating a ton of data about the business and their customers. Companies now have personal information on millions of users, data on how those users use their software, where, when and how. Software also collected digital healthcare data, financial data, purchasing data, logistics data, mobile data, Internet of Things data. Just about everything that exists today – person or object – could have data associated with it that is collected and stored.  

While companies are racing to optimize and monetize this amazing resource with technology to uncover insights and share those across the business, they face a new headwind they didn’t in the transition to becoming a software company: data privacy regulations. There were no regulations around writing code, but seemingly every month more and more laws come out directing how PII, PHI, and PCI data can be stored and shared. The E.U.’s GDPR data privacy law went into effect in 2018, California’s CCPA in 2020 and, this year, Virginia and Colorado passed regulations that protect consumer privacy. These regulations come with steep penalties for data leaks or misuse—California’s privacy act fines, for example, can range from $2,500 to $7,500 per record. At that rate, a leak involving even 100,000 records could, in principle, mean an exposure of $250 million or more.

No-code, low-code software makes data governance self-service and turns a headwind into a tailwind

Overcoming this headwind means putting something in place to comply with these regulations. Like the industries that started behind in the transition to becoming a software company, many companies today may be early in their journey to data governance – according to TDWI research, just 25% have a data governance program and 44% are only in the design phase. The rest aren’t even thinking about it yet. And they face a similar skills gap. Several of today’s governance technologies are based on legacy infrastructure that not only involves big investments in time, money, and human resources to implement, but also requires expensive developers to set up and maintain. Because, guess what, just like the early days of software, they need people who can code!

The good news is that we already know the solution to that challenge: create no-code/low-code tools that allow non-coders to roll out and manage the data governance solution themselves. This is where ALTR is ahead of the curve. Our cloud-based data access control and security platform requires no code to set up or maintain. Any user can easily automate policy enforcement with a few clicks in the ALTR interface, immediately see how sensitive data is being consumed, and document that access and usage to comply with all relevant privacy regulations. No one needs to know SnowSQL, Apache Ranger, YAML or any other code – the activation of governance policies and controls can be handed off to the data governance teams or any other non-coders to implement and manage. Not only does the process become faster, it becomes less error-prone. Governance teams can see that the policies are working correctly with their own eyes, and they can adjust immediately if there’s something off – just like a marketing team can fix a typo on their website.


By delivering data control and protection solutions that anyone can use, we’re allowing data governance to be self-service and enabling companies to utilize sensitive data because anyone can implement the necessary controls and protections.  

Becoming Data-Driven Faster

And now with our ALTR Free (forever) plan, we’re truly democratizing data governance: we’re removing all the skills, expense and resource roadblocks standing in the way, so that everyone can control and protect their data easily, swiftly and freely. Companies can turn the headwind into a tailwind to more quickly get more value out of data and become the “data-driven companies” they need to be.


The concept of “digital transformation” must be familiar to just about everyone in the business world at this point. Our applications and activities continue their migration from on-premises into the cloud at a rapid rate. With this comes the idea of “digital data transformation”. Data isn’t just moved as is, but should be re-imagined, remodeled and modernized to get the most from the cloud data platforms that will be hosting it.  

To this end, ALTR has partnered with emerging digital data transformation firm DataSwitch.

DataSwitch's no-code/low-code platform, cloud data expertise, and unique automated schema generation accelerate time to market. DataSwitch provides end-to-end data, schema and ETL process migration with automated re-platforming and refactoring, delivering faster time to market and a significant reduction in the cost of migration.

The DataSwitch Migrate toolkit leverages advanced automation to migrate schema, data, and processes from legacy databases like Oracle, Teradata, Netezza, and SQL Server and ETL tools like Informatica, DataStage, and SSIS to modern cloud-based data warehouses like AWS Redshift, Snowflake, and Google BigQuery and integration platforms like Databricks, Spark, SnapLogic, and Matillion. And all this is delivered with built-in technology best practices.

DS Migrate automates data transformation by implementing three key components which include:  

  • Schema Redesign: Intuitive, predictive and self-serviceable schema redesign and restructuring from old school data models to new generation data models (including JSON and Graph).
  • Data Migration: Enhanced automation, cloud expertise and automated schema generation to accelerate data migration.  
  • Processes Conversion: No-touch code conversion from legacy data scripts and ETL tools to modern database scripts and ETL tools, through an adaptive Translator + Compiler design.

With the full suite of DataSwitch tools, enterprises can scale their decision-making, automate manual processes, and simplify complex analyses, thereby accelerating time to translate data into real-world insights and build their scalable Cloud Data Platforms.

Furthermore, drawing on its experience with regulated industries, DataSwitch, in collaboration with ALTR, is investing in domain-specific solution use cases to address needs including business process automation, customer 360, institutional and market risk identification, and fraud detection.

For customers with sensitive data, migration to the cloud can be a little trickier due to privacy regulations. Integrating the DS Migrate toolkit with ALTR helps companies comply with the relevant rules through seamless tokenization of sensitive data as it is migrated, protecting it from risk of theft or leaks. Sensitive datasets of structured, semi-structured, and unstructured data can be tokenized without complex and lengthy software installation. There are no keys to maintain, and no maps to reduce the security of the data. And using ALTR’s cloud platform, tokenized data can be accessed from anywhere you allow. With ALTR, privacy, risk, compliance, data, and security teams work together to govern and automatically control access to sensitive data, simplifying role management, and ensuring data flows to whoever needs it while private information stays protected.

By combining ALTR and DataSwitch, data engineering teams can quickly and easily modernize their data architecture, migrate sensitive data to cloud data platforms safely, and share data with anyone who needs it while ensuring data privacy and security.

To learn more about how DataSwitch and ALTR can help your business, request a demo.

In just seven months since we launched our integration with Snowflake to deliver the market’s first cloud-native platform for observability, governance and protection of sensitive data, we’ve seen tremendous growth. We released innovative product features including automated data usage visualizations, launched the data governance industry’s first and only Free plan available via Snowflake Partner Connect, and seen significant customer momentum. And now we’re pleased to announce that the work we put into developing our Snowflake practice has resulted in ALTR attaining Premier Partner status in the Snowflake Partner Network.

We designed our solution for Snowflake to make it easy for enterprises of all sizes to take full advantage of the Snowflake platform. With our no-code, cloud-based solution, utilizing Snowflake’s native governance and security features, companies can enjoy enterprise-level data control and protection that is easy to implement, manage and maintain, ensuring that valuable sensitive data can be utilized across the business while staying secure.

Shared customers take notice

Shared customers like innovative functional nutrition and supplement supplier HumanN are reaping the rewards. The consumer goods company utilizes Snowflake as a central database to analyze customer data from a multitude of cloud and on-premises systems. Adding ALTR allowed HumanN to include sensitive customer data in its analytics while ensuring the data is safe and in compliance with privacy regulations.

“Everything we do at HumanN is driven by our desire to help people—to push harder, to achieve greater and finish stronger,” said Kocher. “We treat our customers’ data with the same respect we treat the humans behind the data. Our adoption of ALTR is another step in that direction.”

Award-winning consumer activation company, Welltok is also seeing the benefits of being an ALTR/Snowflake customer. “Data security and privacy is of the utmost importance to Welltok and our clients, especially when it comes to health information,” said David MacLeod, CIO and CISO for Welltok. “Working with ALTR helps us ensure safety with real-time monitoring and action.”

Get started with the industry’s leading data control and protection solution for Snowflake. Get ALTR Free!

Get Started with ALTR for Free

Last week, ALTR CTO and co-founder James Beecham had a Zoom call with Angelbeat’s Ron Gerber to chat about some data-related themes like the complexity of data, why data is becoming the new endpoint when you think about security, how PII is becoming the new PCI, and the challenges around using and securing sensitive data in your cloud data platform. Those themes and more are featured in the video of that conversation, by way of preparing for the upcoming Angelbeat Data event in June (more to come on that soon).

Speaking of events, they’re a pretty big deal for us at ALTR; with all the knowledge sharing and networking, they’re mutually beneficial for vendors and attendees alike. From webinars and virtual events to the long-lost in-person trade shows, we consider ourselves fortunate to be able to work with folks like Ron Gerber of Angelbeat, our customers over at Q2, partners at Snowflake, and all the other experts who share their experiences and insight into the data security space.  

In addition to speaking with Ron, James also recently hosted a webinar on “A Security-First Approach to Re-Platforming Data in the Cloud” with Q2’s CAO, Lou Senko, and Snowflake’s Head of Cyber Security, Omer Singer. This webinar not only demonstrates ALTR’s cloud integration with Snowflake, but it also provides real use cases from our customer Q2.

On top of the availability of our Snowflake integration (used by Q2, The Zebra, and HumanN to name a few), we are excited to announce our latest integration with OneTrust. OneTrust unifies data governance under one platform, streamlining business processes and ensuring privacy and security practices are built-in to your data governance program.  

Together, the integration between OneTrust and ALTR further simplifies data governance by automating the enforcement of governance policy. Now organizations can automatically and continuously discover, classify, and govern access to sensitive data. Sign up here to see a live demonstration of this integration on May 12th.  

Along with periodic webinars with industry experts, customers, and partners, we’re also pleased to let you know that we will be participating in Dataversity’s 25th Annual Enterprise Data World event at the end of April as well as RSA’s annual conference in May. As life starts to get closer and closer to normal, we can’t wait to start seeing you all out at in-person conferences later in the year.  

To keep up with all the events going on at ALTR, check out our events page, which is always up-to-date with where you can find us. If you are ready to get started with ALTR, you can try it for free or request a demo.  

ALTR teamed up with IDG to host a discussion entitled Data, Security and Visibility: How to Minimize Risk in a Time of Rapid Business Change. The aim was to share best practices and challenges around:

  • The ongoing struggle between security needs and innovation goals and how the pandemic has added to that tension
  • How remote work has increased risk and security exposures, specifically related to insider threats and credentialed breaches
  • The importance of observability to understand how data is being consumed in order to establish patterns and quickly recognize risky applications and abnormal consumption
  • How to distinguish between security at the device, application, and database level
  • Re-evaluating priorities and making effective decisions when it comes to security and data protection

With participants from an array of different industries, job functions, and project priorities, it was interesting to learn about their specific goals and challenges, but in the end it was evident that the group had a lot more in common than we anticipated.

Key takeaways:

1. The business owns and understands the data, making it increasingly challenging for IT to protect the data.

One participant pointed out that whenever someone needs data within the company, they ask IT. While that seems like a logical place to start, it’s usually the business that actually owns and understands the data. IT usually ends up finding the data (from the business), but it is not an efficient use of time and resources. The first step to solving this problem is to find a single platform to bridge that gap, providing observability and logging all consumption. This will allow IT to maintain protection of the data while also being able to curate it in a timely manner.

2. Remote work has all but dissolved the traditional perimeter for any organization or enterprise that still had strong network-based security.

A member of the group shared his story of when the pandemic started and work from home became mandatory. Remote access, which for many organizations might have been a small percentage of their work force, suddenly became the only way that workers were using the organization’s resources. Strong network-focused security postures needed to adjust overnight into more data-centric approaches.

3. Data security is still far too dependent on the infrastructure on which it resides – the cloud has made the problem worse because cloud providers try to provide differentiated toolsets.

This conclusion came out of a discussion around security tools canceling out many of the reasons to leverage the cloud in the first place. Easy to get started, no hardware to install, and the ability to scale quickly are what make the cloud so appealing. So why shouldn’t your security solutions work the same way?

Many newer, more advanced security products are less bound to a specific infrastructure, which means that they can function across hybrid environments and simplify the complex mix of products. This simplification driven by the cloud has become a priority for security leaders.

4. Security clouds are becoming a cost-effective reality with cloud data platforms like Snowflake, but organizations are still overwhelmed with the amount of security data they are collecting.

Cloud data platforms have dramatically improved the speed, efficiency, and flexibility of collecting and analyzing data to power the modern data-driven enterprise. But ease of use and greater access to collected data has presented new challenges in terms of managing data consumption. The modern data ecosystem starts with core applications that create and use massive amounts of data every day. Along the way data is shared, both inbound data from third party sources and outbound data shared with close partners.

By first observing data consumption, you can understand how data is being consumed, recognize patterns, and create baselines. Observation also reveals the high-risk applications you should probably focus on. Once you understand how the data is being consumed, you can begin to actually govern the consumption. Using this approach, you ensure your data is safe while keeping it accessible for the business to do its job.
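As a simple illustration of the baseline-then-alert idea, here is a sketch that flags a user whose daily consumption far exceeds their historical norm; the threshold and numbers are illustrative assumptions:

```python
# Baseline a user's daily consumption, then flag days that deviate sharply.
import statistics

def build_baseline(daily_row_counts):
    """Mean and standard deviation of historical daily rows read."""
    return statistics.mean(daily_row_counts), statistics.pstdev(daily_row_counts)

def is_anomalous(todays_rows, baseline, n_sigmas=3):
    """True when today's consumption exceeds the norm by n_sigmas deviations."""
    mean, stdev = baseline
    return todays_rows > mean + n_sigmas * max(stdev, 1)

history = [1_050, 980, 1_120, 1_000, 990]  # rows read per day by one user
baseline = build_baseline(history)

print(is_anomalous(1_100, baseline))    # False: within normal variation
print(is_anomalous(250_000, baseline))  # True: investigate or block
```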

5. An organization's data security approach must be unique because it’s dependent on the type of data they have and how it needs to be accessed.

One attendee at the round table worked at a design firm that deals in very large files that are sensitive because they contain important intellectual property, while another worked for a large insurance company that deals with large structured databases that contain PII. Every organization has unique needs and challenges, but they have all been affected by remote work and now have data traversing the Internet far more than before.
