Insights from the Gartner Data & Analytics Summit in London

May 28, 2024

Analytics Data Integration Event

Read in minutes

I had the opportunity to attend the Gartner Data & Analytics Summit in London from May 13th to 15th. This three-day event featured over 100 sessions, many of which ran concurrently, making it impossible to capture everything. However, I would like to share my top takeaways from this insightful conference.

D&A generates value


Evaluating the return on the governance and management of Data & Analytics (D&A) is often challenging. Recent studies and polls conducted by Gartner provide concrete evidence to support these efforts.

Studies show that strong D&A maturity can boost a firm’s financial performance by 30%.

Governance is a critical cornerstone of this D&A maturity, yet its value is frequently underestimated. To harness the full potential of our D&A initiatives, it’s essential that we shift our focus from traditional ROI metrics to broader business outcomes.

Moreover, prioritizing execution over strategy can drive more tangible results. By emphasizing practical implementation and operational excellence, we can ensure that our D&A governance delivers maximum value to the organization.

AI and GenAI – the elephant in the room

Everyone is talking about AI and GenAI, and it’s widely accepted that GenAI represents a disruption on par with the creation of the internet itself. The pressing question is: how can we harness this disruption to meet our own needs?

Studies show that firms which have designated AI as a strategic priority have outperformed their peers by 80% over the past nine years. This highlights the transformative potential of AI when integrated into a company’s core strategy.

The concept of “AI-ready data” came up in many sessions. Compared with analytics data, AI requires more than quality, compliance, and accessibility: it also needs richer metadata, context, and labeling. This can only be achieved through mature governance practices.

Learning & collective intelligence

In today’s fast-paced and data-rich environment, strongly centralized models struggle to keep up with the volume of data and the speed at which decisions need to be made. This is where the concept of collective intelligence comes in: distributing decision-making power to local groups rather than centralizing it with top management.

This approach can be effectively implemented by focusing on several key areas:

Access to the Right Data: Ensure that all team members have access to the relevant and necessary data. This empowers local groups to make informed decisions quickly and accurately.

Sense of Purpose: Clearly communicate the organization’s vision and goals. When teams understand the bigger picture and their role within it, they are more motivated and aligned with the company’s objectives.

Autonomy: Granting teams the autonomy to make decisions fosters innovation and responsiveness. Local groups are often closer to the issues at hand and can act more swiftly than a centralized authority.

Literacy: Invest in training programs that enhance data and AI literacy skills. Equipping teams with the knowledge to understand and leverage data effectively is crucial for informed decision-making.

Data Fabric and Data Mesh

During the summit, two key architectural approaches were frequently discussed: Data Fabric and Data Mesh. While sometimes viewed as opposing strategies, they can also be seen as complementary.

Data Fabric focuses on leveraging the extended metadata provided by our existing platforms. Its primary goal is to “activate” this metadata to facilitate automated enhancements and suggestions.

Data Mesh, on the other hand, decentralizes data delivery and empowers business-driven D&A initiatives. Its core principles include treating Data as a Product.

Combining these approaches can lead to a scalable, flexible data architecture.

Data Fabric’s automation capabilities can enhance the efficiency of decentralized data management within a Data Mesh framework.

The extended metadata in Data Fabric can support the productization efforts of Data Mesh, ensuring that data products are enriched with comprehensive metadata.

Data and Analytics Governance Is Key to Your Success

As a final takeaway, and as the previous sections have shown, governance was a central theme in many of the sessions I attended. The modern approach to governance emphasizes a federated or cooperative model rather than a traditional centralized one. This approach aligns more closely with our business strategy and desired outcomes, rather than focusing solely on the data itself.

Governance should be driven by the business strategy and desired outcomes. By focusing on what we aim to achieve as an organization, we can ensure that our governance efforts are relevant and impactful.

Author: Xavier GROSFILS, COO at AKABI


Enhancing Real-Time Data Processing with Databricks: Apache Kafka vs. Apache Pulsar

May 28, 2024

Analytics Data Integration Microsoft Azure

Read in minutes

In the era of big data, real-time data processing is essential for organizations seeking immediate insights and the ability to respond swiftly to changing market conditions. Apache Kafka and Apache Pulsar are two of the most popular platforms for managing streaming data. Their integration with Databricks, a powerful analytics platform built on Apache Spark, enhances these capabilities, providing robust solutions for real-time data management. This article explores the features of Kafka and Pulsar, compares their strengths, and provides guidance on which to choose based on specific use cases.

Apache Kafka: A Standard in Data Streaming

Apache Kafka is a distributed streaming platform originally developed by LinkedIn and later donated to the Apache Software Foundation. Kafka’s architecture is based on a distributed log, where data is written to “topics” divided into partitions.

Topics act as categories for data streams, while partitions are the individual logs that store records sequentially. This division allows Kafka to scale horizontally, enabling high throughput and parallel processing. Partitions also ensure fault tolerance by replicating data across multiple brokers, which maintains data integrity and availability.

Kafka excels in scenarios that require rapid ingestion and real-time processing of large volumes of data. Its ecosystem includes Kafka Streams for stream processing, Kafka Connect for integrating various data sources, and KSQL for querying data streams with SQL-like syntax. These features make Kafka ideal for applications such as monitoring, log aggregation, and real-time analytics.

Key Features of Kafka:

  • High Throughput and Low Latency: Capable of handling millions of messages per second with minimal delay, making it suitable for applications that require quick data processing.
  • Durable Storage: Messages can be stored for a configurable retention period, allowing for replay and historical analysis.
  • Mature Ecosystem: Includes robust tools for stream processing, data integration, and real-time querying.
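
To make the topic and partition model described above concrete, here is a minimal producer sketch using the confluent-kafka Python client; the broker address, topic name, and payload are illustrative assumptions, not details from the original article.

from confluent_kafka import Producer

# Broker address is an assumption; point it at your own cluster
producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery_report(err, msg):
    # Invoked once per message to confirm delivery or surface an error
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [partition {msg.partition()}]")

# The key determines the target partition, so records for the same key stay ordered
producer.produce("events", key="user-42", value='{"action": "click"}', callback=delivery_report)
producer.flush()

Because messages with the same key land in the same partition, per-key ordering is preserved even as the topic scales across brokers.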

Apache Pulsar: The Next Generation of Streaming

Apache Pulsar is a distributed messaging and streaming platform developed by Yahoo and now managed by the Apache Software Foundation. Pulsar’s architecture separates message delivery from storage using a two-tier system comprising Brokers and BookKeeper nodes. This design enhances flexibility and scalability.

Brokers handle the reception and delivery of messages, while BookKeeper nodes manage persistent storage. ZooKeeper plays a crucial role in this architecture by coordinating the metadata and configuration management. This separation allows Pulsar to scale storage independently from message handling, providing efficient resource management and improved fault tolerance. Brokers ensure smooth data flow, BookKeeper nodes ensure data durability, and ZooKeeper maintains system coordination and consistency.

Pulsar supports advanced features such as multi-tenancy, geographic replication, and transaction support. Its multi-tenant capabilities allow multiple teams to share the same infrastructure without interference, making Pulsar suitable for complex, large-scale applications. Additionally, Pulsar supports various APIs and protocols, facilitating seamless integration with different systems.

Key Features of Pulsar:

  • Multi-Tenancy: Supports multiple tenants with resource isolation and quotas, providing efficient resource management.
  • Advanced Features: Includes geographic replication for data availability across data centers and transaction support for consistent message delivery.
  • Flexible Integrations: Supports various APIs and protocols, enabling easy integration with different systems.
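
As a rough counterpart on the Pulsar side, the sketch below sends and receives a message with the pulsar-client Python library; the service URL, topic, and subscription name are assumptions chosen for illustration.

import pulsar

# Service URL and topic are assumptions; Pulsar topics are namespaced per tenant
client = pulsar.Client("pulsar://localhost:6650")
producer = client.create_producer("persistent://public/default/events")
producer.send(b'{"action": "click"}')

# A named subscription lets multiple consumers share or replay the stream
consumer = client.subscribe("persistent://public/default/events", subscription_name="demo-sub")
msg = consumer.receive()
print(msg.data())
consumer.acknowledge(msg)
client.close()

The tenant/namespace prefix in the topic name (public/default here) is what makes multi-tenancy a first-class concept in Pulsar.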

Comparing Apache Kafka and Apache Pulsar

While both Kafka and Pulsar are designed for real-time data streaming, they have distinct characteristics that may make one more suitable than the other depending on specific use cases.

Performance and Scalability: Kafka is known for its high throughput and low latency, making it ideal for applications requiring rapid data ingestion and processing. It is well-suited for high-performance use cases where low latency is critical. Pulsar offers similar performance levels but excels in scenarios requiring multi-tenancy and seamless scaling. Because its architecture separates compute from storage, Pulsar is preferable for applications that need flexible scaling and multi-tenant support.

Architecture and Flexibility: Kafka uses a simpler, monolithic architecture which can be easier to deploy and manage for straightforward use cases. This simplicity can be advantageous for quick and efficient setup. In contrast, Pulsar’s two-tier architecture provides more flexibility, especially for applications requiring geographic replication and fine-grained resource management. Pulsar is better suited for complex architectures needing advanced features.

Feature Set: Kafka’s extensive ecosystem, including tools like Kafka Streams, Kafka Connect, and KSQL, makes it a comprehensive solution for stream processing and real-time querying. This makes Kafka ideal for use cases that leverage its mature set of tools. Pulsar includes advanced features like native multi-tenancy, message replication across data centers, and built-in transaction support. These features make Pulsar preferable for applications requiring sophisticated capabilities.

Community and Ecosystem: Kafka has a larger and more mature ecosystem with widespread adoption across various industries, making it a safer bet for long-term projects needing extensive community support. Pulsar, while rapidly growing, offers cutting-edge features particularly appealing for cloud-native and multi-cloud environments. Pulsar is more appropriate for modern, cloud-native applications.

Integration with Databricks

Databricks, built on Apache Spark, leverages both Kafka and Pulsar to provide powerful and scalable real-time data processing capabilities. Here’s how these integrations enhance Databricks:

Databricks offers built-in connectors for reading and writing data directly from and to Kafka, enabling users to build real-time data pipelines with Spark Structured Streaming. This facilitates the transformation and analysis of data streams in real time.
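
As a minimal sketch of what this looks like in a Databricks notebook (where spark is predefined), the following reads a Kafka topic with Structured Streaming and writes it to a Delta table; the broker address, topic, checkpoint path, and table name are assumptions.

from pyspark.sql.functions import col

kafka_df = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # assumption
    .option("subscribe", "orders")                       # assumption
    .option("startingOffsets", "latest")
    .load())

# Kafka delivers keys and values as binary, so cast them before transforming
parsed = kafka_df.select(col("key").cast("string"), col("value").cast("string"))

(parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")  # assumption
    .outputMode("append")
    .toTable("orders_stream"))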

Similarly, Databricks supports Apache Pulsar, allowing for real-time data streaming with exactly-once processing semantics. Pulsar’s features such as geographic replication and transaction support enhance the resilience and reliability of streaming applications on Databricks.
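
A comparable sketch for Pulsar is shown below, based on the Pulsar connector available in recent Databricks runtimes; the option names follow that connector as I understand it, and the service URL, topic, and table names are assumptions.

pulsar_df = (spark.readStream
    .format("pulsar")
    .option("service.url", "pulsar://broker:6650")            # assumption
    .option("topics", "persistent://public/default/orders")   # assumption
    .load())

# From here the stream can be transformed and written exactly like the Kafka example above
(pulsar_df.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/pulsar_orders")  # assumption
    .outputMode("append")
    .toTable("pulsar_orders_stream"))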

Benefits of Integration

Integrating Kafka and Pulsar with Databricks provides several benefits. The scalability of both platforms allows for handling large volumes of real-time data without compromising performance. Pulsar’s multi-tenant capabilities and Kafka’s extensive features provide flexible integration tailored to specific business needs. Databricks also offers robust tools for access management and data governance, enhancing the security and reliability of streaming solutions.

Conclusion

Integrating Kafka and Pulsar with Databricks allows organizations to leverage leading streaming technologies to build efficient and scalable real-time data pipelines. By combining the power of Spark with Kafka’s resilience and Pulsar’s flexibility, Databricks provides a robust platform to meet the growing needs of real-time data processing.

For high-speed, low-latency applications, Kafka is the preferred choice. For complex, multi-tenant environments requiring advanced features like geographic replication and transaction support, Pulsar is more suitable.

Author: Pierre-Yves RICHER, Data Engineering Practice Leader at AKABI


Revolutionizing Data Engineering: The Power of Databricks’ Delta Live Tables and Unity Catalog

February 20, 2024

Business Intelligence Data Integration Microsoft Azure

Read in 5 minutes

Databricks has emerged as a pivotal platform in the data engineering landscape, offering a comprehensive suite of tools designed to tackle the complexities of data processing, analytics, and machine learning at scale. Among its innovative offerings, Delta Live Tables (DLT) and Unity Catalog stand out as transformative features that significantly enhance the efficiency and reliability of data pipelines. This article delves into these concepts, elucidating their functionalities, benefits, and their particular relevance to data engineers.

Delta Live Tables (DLT): Revolutionizing Data Pipelines

Delta Live Tables is an ETL framework built on top of Databricks, designed to streamline the development and maintenance of data pipelines. With DLT, data engineers can define declarative pipelines that automatically manage complex data transformations, dependencies, and error handling. This high-level abstraction allows engineers to focus on business logic and data transformations rather than the operational complexities of pipeline orchestration.
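
As a minimal illustration of this declarative style, the sketch below defines a two-step DLT pipeline in Python with embedded quality expectations; the storage path, column names, and table names are assumptions made for the example.

import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw orders ingested incrementally with Auto Loader")
def orders_raw():
    return (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/orders"))  # path is an assumption

@dlt.table(comment="Orders that pass basic quality checks")
@dlt.expect_or_drop("valid_amount", "amount > 0")        # failing rows are dropped
@dlt.expect("has_customer", "customer_id IS NOT NULL")   # violations are only recorded as metrics
def orders_clean():
    return (dlt.read_stream("orders_raw")
        .select("order_id", "customer_id", col("amount").cast("double")))

Each function declares what a table should contain; DLT derives the dependency graph, incremental updates, and error handling from these definitions.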


Key Features and Advantages:

  • Declarative Syntax: DLT allows data engineers to define transformations using SQL or Python, specifying what the data should look like rather than how to achieve it. This declarative approach simplifies pipeline development and maintenance.
  • Automated Error Handling: DLT provides robust error handling mechanisms, including automatic retries, dead-letter queues for unprocessable messages, and detailed error logging. This reduces the time data engineers spend on debugging and fixing pipeline issues.
  • Data Quality Controls: With DLT, data engineers can embed data quality checks directly into their pipelines, ensuring that data meets specified quality constraints before it moves downstream. This built-in validation mechanism enhances data reliability and trustworthiness.
  • Live Tables: DLT continuously monitors for new data and incrementally updates its outputs, ensuring that downstream users and applications always have access to fresh, high-quality data. This real-time processing capability is crucial for time-sensitive analytics and decision-making.
  • Change Data Capture (CDC): DLT supports the capture of changes made to source data, enabling seamless and efficient integration of updates into data pipelines. This feature ensures that data reflects the latest changes, crucial for accurate analytics and real-time reporting.
  • Historical and Live Views: Data engineers can create views that either maintain a history of data changes or display the most current data. This allows users to access data snapshots over time or see the present state of data, thereby facilitating thorough analysis and informed decision-making.

Unity Catalog: Centralizing Data Governance

Unity Catalog enhances Databricks by introducing a unified governance framework for all data and AI assets in the Lakehouse, centralizing metadata management, access control, and auditing to streamline data governance and security at scale.

A data catalog acts as an organized inventory for an organization’s data assets, providing metadata, usage, and source information to facilitate data discovery and management. Unity Catalog realizes this by integrating with the Databricks Lakehouse, offering not just a cataloging function but also a unified approach to governance. This ensures consistent security policies, simplifies data access management, and supports comprehensive auditing, helping organizations navigate their data landscape more efficiently and in compliance with regulatory requirements.
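
As a small sketch of how this centralized access management surfaces to engineers working in a notebook, the statements below grant a group read access through Unity Catalog's three-level namespace (catalog.schema.table); the catalog, schema, table, and group names are assumptions for illustration.

spark.sql("GRANT USE CATALOG ON CATALOG sales TO `data_analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA sales.reporting TO `data_analysts`")
spark.sql("GRANT SELECT ON TABLE sales.reporting.orders TO `data_analysts`")

Finer-grained controls such as row filters and column masks build on the same governance model.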

Key Features and Advantages:

  • Unified Metadata Management: Unity Catalog consolidates metadata across various data assets, including tables, files, and machine learning models, providing a single source of truth for data governance.
  • Fine-grained Access Control: With Unity Catalog, data engineers can define precise access controls at the column, row, and table levels, ensuring that sensitive data is adequately protected and compliance requirements are met.
  • Cross-Service Policy Enforcement: Unity Catalog applies consistent governance policies across different Databricks workspaces and services, ensuring uniform security and compliance posture across the data landscape.
  • Data Discovery and Lineage: It facilitates easy discovery of data assets and provides comprehensive lineage information, enabling data engineers to understand data origins, transformations, and dependencies. This transparency is vital for troubleshooting, impact analysis, and compliance auditing.
  • Auditing: This feature tracks data interactions, offering insights into user activities and changes within the Databricks environment. This facilitates compliance and security by providing a detailed audit trail for accountability and analysis.

Integration: Synergy Between DLT and Unity Catalog

The integration of Delta Live Tables and Unity Catalog within Databricks provides a cohesive and powerful environment for data engineering. DLT’s streamlined pipeline management, combined with Unity Catalog’s robust governance framework, offers a comprehensive solution for building, managing, and securing data pipelines at scale.

  • Enhanced Data Reliability: DLT’s real-time processing and data quality checks, coupled with Unity Catalog’s governance capabilities, ensure that data pipelines produce accurate, reliable, and compliant data outputs.
  • Increased Productivity: The declarative nature of DLT and the centralized governance of Unity Catalog reduce the complexity and overhead associated with data pipeline development and management, allowing data engineers to focus on delivering value.
  • Scalability and Flexibility: Both DLT and Unity Catalog are designed to scale with the needs of the business, accommodating large volumes of data and complex data transformations without sacrificing performance or manageability.

Conclusion: Empowering Data Engineers

For data engineers, the combination of Delta Live Tables and Unity Catalog within Databricks represents a significant leap forward in terms of productivity, data quality, and governance. By abstracting away the complexities of pipeline development and data management, these features allow engineers to concentrate on solving business problems through data. The result is a more efficient, reliable, and secure data infrastructure that can drive insights and innovation at scale. As the data landscape continues to evolve, tools like DLT and Unity Catalog will be indispensable in empowering data engineers to meet the challenges of tomorrow.

It’s important to note that, although Delta Live Tables (DLT) and Unity Catalog are designed to work together seamlessly within the Databricks environment, it’s perfectly viable to pair DLT with a different data cataloging system. This versatility allows organizations to take advantage of DLT’s sophisticated capabilities for automating and managing data pipelines while still utilizing another data catalog that may align more closely with their existing infrastructure or specific needs. Databricks supports this flexible data management strategy, enabling businesses to leverage DLT’s real-time processing and data quality enhancements without being restricted to using only Unity Catalog.

As we explore the horizon of technological innovation, it’s evident that the future is unfolding before us. Engaging with the latest advancements in data management and governance is more than just keeping pace; it’s about seizing the opportunity to redefine how we interact with the vast universe of data. The moment has come to embrace these new possibilities, leveraging their power to drive forward our data-centric initiatives.

Author: Pierre-Yves RICHER, Data Engineering Practice Leader at AKABI


AKABI’s Consultants Share Insights from Dataminds Connect 2023

November 20, 2023

Analytics Business Intelligence Data Integration Event Microsoft Azure

Read in 5 minutes

Dataminds Connect 2023, a two-day event taking place in the charming city of Mechelen, Belgium, has proven to be a cornerstone in the world of IT and Microsoft data platform enthusiasts. Partly sponsored by AKABI, this event is a gathering of professionals and experts who share their knowledge and insights in the world of data.

With a special focus on the Microsoft Data Platform, Dataminds Connect has become a renowned destination for those seeking the latest advancements and best practices in the world of data. We were privileged to have some of our consultants attend this exceptional event and we’re delighted to share their valuable feedback and takeaways.

How to Avoid Data Silos – Reid Havens

In his presentation, Reid Havens emphasized the importance of avoiding data silos in self-service analytics. He stressed the need for providing end users with properly documented datasets, making usability a top priority. He suggested using Tabular Editor to hide fields or make them private to prevent advanced users from accessing data not meant for self-made reports. Havens’ insights provided a practical guide to maintaining data integrity and accessibility within the organization.

Context Transition in DAX – Nico Jacobs

Nico Jacobs took on the complex challenge of explaining the concept of “context” and circular dependencies within DAX. He highlighted that while anyone can work with DAX, not everyone can understand its reasoning. Jacobs’ well-structured presentation made it clear how context influences DAX and its powerful capabilities. Attendees left the session with a deeper understanding of this essential language.

Data Modeling for Experts with Power BI – Marc Lelijveld

Marc Lelijveld’s expertise in data modeling was on full display as he delved into various data architecture choices within Power BI. He effortlessly navigated topics such as cache, automatic and manual refresh, Import and Dual modes, Direct Lake, Live Connection, and Wholesale. Lelijveld’s ability to simplify complex concepts made it easier for professionals to approach new datasets with confidence.

Breaking the Language Barrier in Power BI – Daan Lambrechts

Daan Lambrechts addressed the challenge of multilingual reporting in Power BI. While the tool may not inherently support multilingual reporting, Lambrechts showcased how to implement dynamic translation mechanisms within Power BI reports using a combination of Power BI features and external tools like Metadata Translator. His practical, step-by-step live demo left the audience with a clear understanding of how to meet the common requirement of multilingual reporting for international and multilingual companies.

Lessons Learned: Governance and Adoption for Power BI – Paulien van Eijk & Teske van Maaren

This enlightening session focused on the (re)governance and (re)adoption of Power BI within organizations where Power BI is already in use, often with limited governance and adoption. Paulien van Eijk and Teske van Maaren explored various paths to success and highlighted key concepts to consider:

  • Practices: Clear and transparent guidance and control on what actions are permitted, why, and how.
  • Content Ownership: Managing and owning the content in Power BI.
  • Enablement: Empowering users to leverage Power BI for data-driven decisions.
  • Help and Support: Establishing a support system with training, various levels of support, and community.

Power BI Hidden Gems – Adam Saxton & Patrick Leblanc

Participating in Adam Saxton and Patrick Leblanc’s “Power BI Hidden Gems” conference was a truly enlightening experience. These YouTube experts presented topics like Query folding, Prefer Dual to Import mode, Model properties (discourage implicit measures), Semantic link, Deneb, and Incremental refresh in a clear and engaging manner. Their presentation style made even the most intricate aspects of Power BI accessible and easy to grasp. The quality of the presentation, a hallmark of experienced YouTubers, made the learning experience both enjoyable and informative.

The Combined Power of Microsoft Fabric for Data Engineer, Data Analyst and Data Governance Manager – Ioana Bouariu, Emilie Rønning and Marthe Moengen

I had the opportunity to attend the session entitled “The Combined Power of Microsoft Fabric for Data Engineer, Data Analyst, and Data Governance Manager”. The speakers adeptly showcased the collaborative potential of Microsoft Fabric, illustrating its newfound relevance in our evolving data landscape. The presentation effectively highlighted the seamless collaboration facilitated by Microsoft Fabric among data engineering, analysis, and governance roles. In our environment, where these roles can be embodied by distinct teams or even a single versatile individual, Microsoft Fabric emerges as a unifying force. Its adaptability addresses the needs of diverse profiles, making it an asset for both specialized teams and agile individuals. Its potential promises to open exciting new perspectives for the future of data management.

Behind the Hype, Architecture Trends in Data – Simon Whiteley

I thoroughly enjoyed Simon Whiteley’s seminar on the impact of hype in technology trends. He offered valuable insights into critically evaluating emerging technologies, highlighting their journey from experimentation to maturity through Gartner’s hype curve model.

Simon’s discussion on attitudes towards new ideas, the significance of healthy skepticism, and considerations for risk tolerance was enlightening. The conclusion addressed the irony of consultants cautioning against overselling ideas, emphasizing the importance of skepticism. The section on trade-offs in adopting new technologies provided practical insights, especially in balancing risk and fostering innovation.

In summary, the seminar provided a comprehensive understanding of technology hype, offering practical considerations for navigating the evolving landscape. Simon’s expertise and engaging presentation style made it a highly enriching experience.

In Conclusion

Dataminds Connect 2023 was indeed a remarkable event that provided valuable insights into the world of data. We want to extend our sincere gratitude to the organizers for putting together such an informative and well-executed event. The knowledge and experiences gained here will undoubtedly contribute to our continuous growth and success in the field. We look forward to being part of the next edition and the opportunity to continue learning and sharing our expertise with the data community. See you next year!

Vincent Hermal, Azure Data Analytics Practice Leader
Pierre-Yves Richer, Azure Data Engineering Practice Leader
with the much-appreciated contributions of Sophie Opsommer, Ethan Pisvin, Pierre-Yves Outlet and Arno Jeanjot



Purge ODI Logs through Database

May 29, 2020

Business Intelligence Data Integration

Read in 3 minutes

If you generate a lot of logs in ODI, purging through the ODI built-in mechanism can be very slow. It is a lot faster to do it directly in the database, but you have to respect the foreign keys. Here is a sample PL/SQL script to do so.

The script takes a single parameter, the number of days of logs you want to keep; it retrieves the corresponding session numbers and deletes from the log tables following their dependencies.


Clean ODI Scenario with Groovy

September 27, 2019

Business Intelligence Data Integration

Read in 1 minute

You may generate a lot of scenarios when developing an ODI project. When promoting or committing to Git, you are usually only interested in the last functional scenario.

Since past versions are stored in Git or already promoted, you may want to clear all old scenarios. If so, this Groovy script may help you. It deletes all scenarios except the latest version (sorted by version name, so take care…).

The script has two parameters: the project code and a regex pattern for the package name, which lets you target specific scenarios.

//Imports core
import oracle.odi.core.persistence.transaction.ITransactionDefinition;
import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;
import oracle.odi.core.persistence.transaction.ITransactionManager;
import oracle.odi.core.persistence.transaction.ITransactionStatus;

//Imports odi Objects
import oracle.odi.domain.project.OdiPackage;
import oracle.odi.domain.runtime.scenario.OdiScenario;
import oracle.odi.domain.project.finder.IOdiPackageFinder;
import oracle.odi.domain.runtime.scenario.finder.IOdiScenarioFinder;


// Parameters -- TO FILL --
String sourceProjectCode = 'MY_PROJECT_CODE';
String sourcePackageRegexPattern = '.*'; // regex pattern; '.*' matches every package ('*' alone is not a valid regex)


println "    Start Scenarios Deletion";
println "-------------------------------------";

//Setup Transaction
ITransactionDefinition txnDef = new DefaultTransactionDefinition();
ITransactionManager tm = odiInstance.getTransactionManager();
ITransactionStatus txnStatus = tm.getTransaction(txnDef);

int scenarioDeletedCounter = 0;

try {
  //Init Scenario Finder
  IOdiScenarioFinder odiScenarioFinder = (IOdiScenarioFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiScenario.class);
  //Loop through all packages in the source project
  for (OdiPackage odiPackageItem : ((IOdiPackageFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiPackage.class)).findByProject(sourceProjectCode)){
    // Only delete scenarios for packages whose name matches the pattern
    if (!odiPackageItem.getName().matches(sourcePackageRegexPattern)) {
      continue;    
    }
    println "Deleting Scenarii for Package " + odiPackageItem.getName();
    
    odiScenCollection = odiScenarioFinder.findBySourcePackage(odiPackageItem.getInternalId());
    maxOdiScen = odiScenCollection.max{it.getVersion()};
    if (maxOdiScen != null) {
      for (OdiScenario odiscen : odiScenCollection ) {
        if (odiscen != maxOdiScen){
          println "Deleting Scenari "+ odiscen.getName() + " " + odiscen.getVersion();
          odiInstance.getTransactionalEntityManager().remove(odiscen);
          scenarioDeletedCounter ++;
        }
      }
    }
 }   
// Commit transaction
tm.commit(txnStatus);


println "---------------------------------------------------";
println "     " + scenarioDeletedCounter + " Scenarios deleted Sccessfully";
println "---------------------------------------------------";

} 
catch (Exception e)
{
  // Print exception
  println "---------------------ERROR-------------------------";
  println(e);
  println "---------------------------------------------------";
  println "     FAILURE : Scenarios Deletion failed";
  println "---------------------------------------------------";
}
