
What is data fabric? How it offers a unified view of your data

vendredi 11 avril 2025, 16:00 , par InfoWorld
What is data fabric?

Data fabric is a type of architecture that aims to provide unified access to the data stored in various places across your organization. The data fabric concept recognizes that most enterprises aren’t able or willing to consolidate every department’s valuable data into one huge data lake.  

A data fabric instead serves as an abstraction layer that interacts with individual data silos, weaving together important information stored in everything from massive traditional RDBMSes to small departmental NoSQL databases. The goal is to automate data discovery and hide the details of CRUD (create, read, update, and delete) transactions from the user so they can treat your company data as one big store of information. 
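To make the idea concrete, here is a minimal, purely illustrative sketch of such an abstraction layer: a toy "fabric" object routes reads to registered back ends, so the caller never sees whether a dataset lives in a relational database or a document store. All class and dataset names here are invented for illustration.

```python
# Illustrative sketch only: a toy fabric layer that hides where each
# dataset physically lives. Names are hypothetical, not a real product API.
import sqlite3

class DataFabric:
    """Routes reads to registered back ends through one interface."""
    def __init__(self):
        self._sources = {}  # logical dataset name -> reader callable

    def register(self, name, reader):
        self._sources[name] = reader

    def read(self, name):
        # The caller never sees whether this hits an RDBMS or a NoSQL store.
        return self._sources[name]()

# Silo 1: a relational store (stands in for a traditional RDBMS).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")

# Silo 2: a document-style store (stands in for a departmental NoSQL DB).
orders_docs = [{"order_id": 101, "customer_id": 1, "total": 250.0}]

fabric = DataFabric()
fabric.register("customers",
                lambda: conn.execute("SELECT id, name FROM customers").fetchall())
fabric.register("orders", lambda: list(orders_docs))

print(fabric.read("customers"))  # [(1, 'Acme Corp')]
print(fabric.read("orders"))
```

A real fabric does far more (discovery, governance, optimization), but the core design choice is the same: one logical read interface in front of many physical stores.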

This is, as you can imagine, easier said than done, but valuable if you can pull it off. The term data fabric was coined in the early 2000s by an analyst at Forrester, but the folks at rival consultancy Gartner have been the ones pushing the idea of this architecture as a distinct category. The concept isn’t fully formed yet — there isn’t universal agreement on what data fabric architecture looks like, for instance, and vendor offerings billed as data fabric don’t all do the same things. In a world where organizations are trying to extract as much value as they can from their data, but despair of ever fully rationalizing their data storage situation, the idea of a fabric that can weave together data from various organizational silos is attractive. 

Data fabric vs. a data fabric
People generally use data fabric (without “a” or “the”) when talking about the general concept we’re discussing here, sort of the way that people talk about big data: “We’re looking into data fabric to break down organizational silos,” for instance. But if you’re talking about a specific implementation, you’re more likely to use an article: “We hired SAP to build a data fabric for the company.”



Data fabric architecture: Key components 

Broadly speaking, data fabric consists of two parts: an app or web-based front end, where users can see and configure the various sources of data, and the systems on which they reside. From this front end, users can create data models and see all their organization’s data.  

The front end interacts with a back-end engine (or engines) that powers the data connections under the hood. These engines automatically keep track of the connections to data sources and available storage, sync and tune data, and so on. 

Usually when people talk about data fabric architecture, they’re talking about the back end, and the components necessary to make that magic happen. As noted, there’s no single universally accepted structure for a data fabric architecture. You can check out several different takes on the subject — IBM outlines Forrester’s definition, SAP has its own ideas, and Qlik, another vendor, offers a different version.   

However, there are several components that these architectures have in common, and you can consider them key to any data fabric architecture: 

Data ingestion and connectivity. This layer ensures data from various sources and silos is brought into the fabric, using multiple integration patterns (data pipelines, streaming, data virtualization, and so on). 

Data processing and orchestration. This layer refines, transforms, and integrates data while automating workflows for efficiency and scalability. 

Data semantics and discovery. This layer creates a shared understanding of data across the enterprise by defining relationships, terminology, and context. 

Data management and governance. This layer ensures data is secure, well-governed, and of high quality, with strong metadata management to provide context. Ideally, the layer can make use of AI/ML-driven metadata activation, enabling automated governance, integration, and intelligent recommendations. 

Data access and consumption. This layer ensures that the right users and systems can access the data they need, via dashboards, APIs, analytics tools, and compliance-based permissions. 
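The semantics and discovery layer above can be hard to picture in the abstract. Here is a hedged sketch of one small piece of it: a tiny metadata catalog that lets users find datasets by business terms rather than by knowing which silo holds them. The dataset names, connection strings, and tags are all invented for illustration.

```python
# Hypothetical sketch of a "data semantics and discovery" component: a
# minimal metadata catalog searchable by business term. Everything named
# here (datasets, sources, tags) is invented for illustration.
class Catalog:
    def __init__(self):
        self._entries = []

    def register(self, dataset, source, tags):
        # Record where a dataset lives and what business concepts it covers.
        self._entries.append({"dataset": dataset,
                              "source": source,
                              "tags": set(tags)})

    def discover(self, term):
        """Return datasets whose tags match a business term."""
        return [e["dataset"] for e in self._entries if term in e["tags"]]

catalog = Catalog()
catalog.register("crm.customers", "postgres://sales-db", ["customer", "pii"])
catalog.register("tickets.cases", "mongodb://support-db", ["customer", "support"])
catalog.register("hr.payroll", "oracle://hr-db", ["employee", "pii"])

print(catalog.discover("customer"))  # ['crm.customers', 'tickets.cases']
```

Production fabrics build this kind of index automatically from crawled metadata; the point of the sketch is only that discovery works on shared semantics, not physical locations.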

Why is data fabric important?

InfoWorld’s Isaac Sacolick provides a deep dive into how you know if your organization needs a data fabric. He cites three indicators that data fabric can help you: 

Your data is siloed and fragmented 

You need real-time analytics for immediate decision-making

You’re aiming to enable generative AI and empower self-service analytics for business users 

Data fabric can solve these problems by providing an abstraction layer that enables immediate access to, analysis of, and processing of your organization’s data, wherever it lives. The issues that data fabric architectures aim to solve are not new: People have been trying to figure out how to get all an organization’s data under one umbrella for literally decades. The big advantage of data fabric — what makes it special — is that you don’t have to move your data into some centralized repository, and you don’t have to convince individual groups within your organization to change too much about the way they deal with data. In theory, a data fabric provides the benefits of a unified data set without the pain of creating one. 

AI will drive data fabric adoption
IDC’s FutureScape: Worldwide Future of Operations 2025 Predictions estimates that by 2026, 80% of the top 500 industrial enterprises will have data fabric capabilities in place to support adoption of AI-driven use cases that require multiple data sets.

Risks of data fabric 

Data fabric shares a major risk with all new technologies that promise to cut a Gordian knot that’s been bedeviling the industry for years: It may get your hopes up too much. Much of data fabric’s promise relies on its capability to discover, tag, and classify data in your various heterogeneous silos automatically. “Think Google Search for your data,” gushes Datafabric.com, a site set up by data fabric vendor Promethium.  

But most IT veterans know that these tools don’t always live up to their promises. Implementing a data fabric in your organization may prove overly complex, and when you finally get things up and running, you may find that the level of data integration isn’t what you hoped. You may find yourself facing the task of manually cleaning up or consolidating some of your data, which you were probably hoping to avoid. 

And if your data fabric rollout is a success, that can lead to another problem: data security. If you have easy access to all your data across various clouds and silos, then so does any attacker who manages to gain access to the system where your data fabric front end runs. You need to ensure that everything is locked down so that your data fabric doesn’t offer an easy front door to those looking to access sensitive organizational information.  
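One common mitigation is to put a default-deny, role-based permission check in the fabric’s access layer itself, so broad connectivity doesn’t become a broad attack surface. The sketch below is a hedged illustration of that idea; the roles, dataset names, and policy table are all invented.

```python
# Hedged sketch: gating fabric reads with role-based permissions.
# The policy table, roles, and dataset names are invented for illustration.
POLICY = {
    "crm.customers": {"analyst", "support"},
    "hr.payroll": {"hr_admin"},
}

def read_dataset(dataset, role, fetch):
    """Run `fetch` only if `role` is allowed to see `dataset`."""
    allowed = POLICY.get(dataset, set())  # default deny: unknown datasets
    if role not in allowed:
        raise PermissionError(f"{role} may not read {dataset}")
    return fetch()

# An analyst can read customer data...
rows = read_dataset("crm.customers", "analyst", lambda: [("Acme Corp",)])
print(rows)

# ...but is denied payroll data, even though the fabric can reach it.
try:
    read_dataset("hr.payroll", "analyst", lambda: [("salary data",)])
except PermissionError as err:
    print("denied:", err)
```

The design point is that authorization is enforced centrally at the fabric layer, not left to each underlying silo with its own inconsistent rules.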

Data fabric use cases 

IBM — a major data fabric vendor — published a case study on banks using data fabric to better understand, serve, and sell to their customers. Customer service in general is one of the big use cases for data fabric, as many companies find that data on their customers is siloed across different departments, making it difficult to get the big picture and insights that they believe lurk in a more unified data view. 

Other big use cases for data fabric include the following: 

Healthcare, where it can consolidate a patient’s electronic health records from multiple sources   

Manufacturing and warehousing, where it can provide data for analysis across an entire supply chain and ingest information from IoT devices 

Retail, where it can enable customer tracking and personalization across channels 

Data fabric implementation steps 

A technical guide to implementing a data fabric at your organization is beyond the scope of this article. But we can offer some big-picture steps you should take as you plan your data fabric rollout and move it forward. 

Pre-implementation

Assess your current data landscape. You need to have a handle on the different places where your data lives and the forms in which it’s stored before you can start planning to connect it at all. 

Understand the business requirements. Narrowing the focus of what your organization expects from data fabric can help you get a handle on the scope of the project. 

Establish strong data governance and security policies. You probably already had these, right? Well, if you didn’t, now’s a great time to lay down the law when it comes to company data. 

Choose a cross-functional data team. This team could include data engineers, analysts, scientists, and stewards from all the departments that will be affected by the rollout.  

Implementation and beyond 

Roll things out in phases. Start with your most critical use cases and then expand the scope for your data fabric from there. This can give you the chance to learn how data needs to be prepared from earlier rollout phases. 

Train your end users. Your users will get on board if they understand how they can make use of data from across the enterprise — but they’ll need help understanding how to do that. Pick enthusiastic volunteers to be the leaders on their teams. 

Monitor and optimize. Keep constant track of what’s going right — and wrong — with your data fabric architecture, and with the benefits data fabric is delivering to your organization. 

Top data fabric vendors 

There are plenty of data fabric vendors, from tech giants to specialized players. Review aggregator site G2 currently ranks these as the top 5: 

Denodo 

IBM Cloud Pak for Data 

Tibco Data Fabric 

Google Dataplex 

SAP Datasphere 

But there are many others in this space, and things are moving fast as vendors embrace new machine learning algorithms for data processing.

Data fabric vs. data mesh vs. data virtualization: What’s the difference?
A data mesh might sound like the same thing as a data fabric, but they’re quite different. A data mesh is an organizational concept, which defines data as a product owned by the departments within a company that collect and control it; these departments make that data available to one another via APIs.
Data virtualization, another related term, is a concept that underlies data fabrics: it’s an approach to data management that allows applications to access data without needing to know where it lives or in what format.
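To illustrate data virtualization in the smallest possible way, the hedged sketch below builds a "virtualized view" that joins customer records from a relational store with orders held in a document-style list — the application calling the view never needs to know where either dataset lives. All sources and field names are invented for illustration.

```python
# Toy data-virtualization sketch: one logical view over two physical
# stores. Sources and fields are hypothetical, invented for illustration.
import sqlite3

# Physical store 1: a relational table of customers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex")])

# Physical store 2: orders kept as documents.
orders = [{"customer_id": 1, "total": 250.0},
          {"customer_id": 1, "total": 99.0}]

def customer_totals():
    """Virtualized view: total order value per customer name."""
    names = dict(conn.execute("SELECT id, name FROM customers"))
    totals = {}
    for order in orders:
        name = names[order["customer_id"]]
        totals[name] = totals.get(name, 0.0) + order["total"]
    return totals

print(customer_totals())  # {'Acme': 349.0}
```

A real virtualization engine would push the join down to the sources and optimize it, but the consumer-facing contract is the same: query by logical name, not by physical location or format.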
