TL;DR
- Visual + code is a paradigm where a visual editor and a code editor produce the same governed artifact, not two parallel representations.
- Every visual action generates real, standard, executable code underneath, which separates it from legacy drag-and-drop Extract, Transform, Load (ETL) and no-code tools.
- Most analytics professionals spend the majority of their time maintaining data rather than doing net-new analytics work.
- Prophecy is an agentic, AI-accelerated data preparation platform that brings visual + code to analytics teams through AI agents, visual workflows, and cloud-native execution.
- The right implementation expands who can build analytics workflows without compromising what gets built.
Visual + code has become one of the most widely used phrases in data tooling. Vendors use it, analyst firms reference it, and platform teams hear it from leadership. Ask five people what it actually means, and you'll get five different answers. Some conflate it with drag-and-drop Extract, Transform, Load (ETL) tools from a decade ago. Others dismiss it as "low-code" with a fresh coat of paint.
That confusion has real consequences. The architecture behind visual + code determines who can build analytics data workflows (sometimes also referred to as analytics data pipelines), how those workflows are governed, and whether your data platform team will approve the tool or block it at the door. Analytics data workflows cover the data preparation and transformation work that analytics teams do after data lands in a cloud data platform like Databricks, Snowflake, or BigQuery.
Our view is straightforward. The only implementations worth adopting are those where the visual layer and the code layer produce the same governed artifact. Anything else creates parallel systems dressed up as modernization.
Three paradigms, side by side
Analytics teams have three architectural options for building analytics data workflows, and the differences carry architectural weight. Here's how they compare:
- Code-only tools: Tools like Apache Airflow and dbt (prior to its Canvas release) require practitioners to write Python or SQL directly. Airflow's documentation describes pipelines as defined in Python, with its web user interface (UI) serving as an operational monitor rather than an authoring surface. These tools are powerful and precise, yet exclusive by design; the dbt team has openly acknowledged that people without SQL expertise are effectively precluded from contributing.
- Graphical user interface (GUI) only tools: These sit at the other extreme. One typical framing describes no-code as a completely hands-off approach, with 100% dependence on visual tools. Configuration lives in proprietary formats, so there is no standard code for engineers to inspect, extend, or version-control.
- Visual + code: The third paradigm is the distinction that actually matters. A graphical interface is one authoring surface, and every action generates real, standard, executable code underneath. The code matches what an engineer would write by hand, without proprietary configuration blobs or approximations.
The architectural property that defines visual + code
Visual + code hinges on a single technical property: the visual layer and the code layer produce the same artifact. The visual interface itself takes a back seat.
dbt Canvas describes this precisely. Visual models compile to SQL and are indistinguishable from other dbt models, and the word "indistinguishable" carries the architectural claim. Coverage of similar designers makes the same case, noting that every workflow built visually creates the same declarative artifact under the hood, so engineers can review and improve without switching tools or rewriting logic.
This property separates visual + code from old-school drag-and-drop ETL. When the visual and code representations share the same artifact, every governance mechanism applies uniformly, regardless of who built the workflow or how. That includes version control, testing, lineage, and access controls.
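To make the unified-artifact property concrete, here is a minimal sketch of what a committed model might look like. The model and table names (stg_orders, stg_customers) are hypothetical; the point is only that a join-and-aggregate step built on a canvas compiles to the same ordinary, dbt-style SQL an engineer could have written by hand, so a single object carries version control, tests, and lineage:

```sql
-- Minimal sketch (hypothetical model): whether this was typed by hand
-- or generated from a visual canvas, the committed artifact is the same
-- plain SQL file, so version control, tests, and lineage see one object.
with orders as (
    select * from {{ ref('stg_orders') }}
),

customers as (
    select * from {{ ref('stg_customers') }}
)

select
    customers.customer_id,
    customers.region,
    count(orders.order_id)  as order_count,
    sum(orders.order_total) as lifetime_revenue
from customers
left join orders
    on orders.customer_id = customers.customer_id
where customers.is_active        -- the visual "filter" step
group by customers.customer_id, customers.region
```

Because there is no second representation, a reviewer diffs this file in Git exactly as they would any hand-written model.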
Data work is a team sport
Visual + code doesn't collapse the boundary between teams; it makes collaboration across that boundary possible. Being explicit about who does what helps clarify the conversation. Responsibilities are typically split like this:
- Data engineers own the foundation: They build and maintain ETL pipelines, manage ingestion, and enforce data governance. They prepare and curate data inside cloud data platforms like Databricks, Snowflake, or BigQuery.
- Analytics teams turn governed data into insights: They build analytics data workflows, perform the transformation needed for analysis, run ad hoc queries, and prepare datasets that feed Business Intelligence (BI) tools.
ETL pipelines remain the primary way data enters the platform, and analytics workflows pick up where ETL leaves off. Conflating the two is where most "self-service" conversations go sideways.
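As a rough illustration of that hand-off, the sketch below contrasts the two kinds of work. The schema and table names are hypothetical and the syntax is Snowflake-style, but the shape generalizes to Databricks and BigQuery:

```sql
-- Engineer-owned ETL: land raw files in the cloud data platform
-- (curation from raw.orders into curated.orders is omitted here).
copy into raw.orders
from @landing_stage/orders/
file_format = (type = 'csv' field_optionally_enclosed_by = '"');

-- Analyst-owned analytics workflow: pick up the governed table and
-- shape it for a dashboard, without touching ingestion at all.
create or replace view analytics.weekly_orders_by_region as
select
    date_trunc('week', order_date) as order_week,
    region,
    count(*)                       as orders,
    sum(order_total)               as revenue
from curated.orders
group by 1, 2;
```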
Why this matters right now
Analytics teams are overwhelmed, and the gap between demand and capacity continues to grow. That's the context that makes visual + code more than an academic distinction. A few data points set the scene:
- Maintenance dominates the day: Some analytics professionals spend most of their time maintaining or organizing datasets, and that figure hasn't budged from the prior year despite widespread AI adoption.
- Demand keeps accelerating: Demand for ETL skills has grown sharply, and enterprise analytics teams now work across hundreds of data sources.
- AI alone hasn't fixed it: Even with 70% of analytics professionals using AI to assist in code development, the maintenance proportion hasn't decreased.
Analytics workflow requests continue to pull attention toward slow, ad hoc work while the business waits with stale or untrusted data. Engineers stay buried in maintenance, and analysts stay stuck in request queues. Visual + code, paired with AI agents, acts as a capacity multiplier for teams that can't hire their way out of a structural supply-demand gap.
What data platform teams actually need to evaluate
The visual interface is the wrong thing to evaluate. If you're the engineer or architect who approves (or rejects) tools for your analytics team, the architectural question that matters is whether the visual interface produces the same artifact as the code interface, or a separate representation that requires conversion.
When two representations exist, governance controls applied to one don't automatically apply to the other. You end up maintaining parallel systems, which is exactly the kind of ungoverned sprawl platform teams want to prevent. The unified artifact model solves this structurally; access controls, lineage, and testing apply to both access modes uniformly because only one underlying object exists.
Analyst firms have caught up as well. The Peer Insights market definition for Integration Platform as a Service now lists low- or no-code tools as a mandatory feature rather than a differentiator. The real decision is which implementation preserves your engineering standards while expanding analyst access.
Bridging the gap without breaking governance
The collaboration problem between analysts and engineers is longstanding.
Visual + code addresses the technical barrier of who can build an analytics workflow, but the organizational barrier requires deliberate design too. That includes deciding who owns the workflow, who enforces standards, and who resolves conflicts. The model emerging across enterprise teams splits responsibilities in layers: analysts work with well-governed data in a self-service manner, while engineers design scalable data models and enforce testing standards. Analysts operate within a testing framework that engineers define and enforce, rather than building workflows outside that framework entirely.
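One concrete way that split shows up is in engineer-owned data tests that run against every workflow, however it was authored. A minimal sketch in the dbt singular-test style (the model name orders_enriched is hypothetical): the query returns rows that break the rule, so any non-empty result blocks the change in CI.

```sql
-- Hypothetical engineer-defined test: analyst-built changes that feed
-- orders_enriched must pass this check before they ship.
select
    order_id
from {{ ref('orders_enriched') }}
where order_total < 0
   or customer_id is null
```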
How visual + code complements your BI stack
Visual + code works alongside BI tools rather than competing with them. Tools like Tableau, Power BI, or Looker offer strong visualization and analysis capabilities, and they depend on well-prepared, trustworthy datasets.
Most analysts' frustration with BI tools traces back to waiting for the right data to land in them, rather than the tools themselves. Visual + code platforms sit upstream; analysts prepare and transform datasets in a governed environment, and those datasets then feed dashboards and reports in the BI tool of choice. Reporting still happens where it always has, and the bottleneck moves out of the way.
How Prophecy implements visual + code
Prophecy is an agentic, AI-accelerated data preparation platform that brings the visual + code paradigm to life for analytics teams. Its approach shows up in a few concrete ways:
- Bidirectional editing: The visual editor and code editor link bidirectionally, so changes in either environment reflect in the other. Visual workflows produce open, standard code committed to GitHub, with no proprietary formats and no lock-in.
- Specialized AI agents: AI in Prophecy takes the form of specialized AI agents that generate transformations, suggest joins, document steps, and validate logic. The agents' output is backed by human review, standardization, and code retained in Git, so analytics teams get the speed of AI with the reliability of engineering.
- Hands-on exploration: You can explore the AI agents directly to see how they work inside real visual workflows.
Adoption without rip-and-replace
The biggest blockers to modernization are often political rather than technical. Nobody wants to bet their job on a big-bang migration that rips out working tools in a single cycle. Teams often start with the efficiency use case instead:
- Prove value alongside existing tools: Show analysts and engineers a faster way to build and manage analytics workflows next to what they already have. When workflows ship faster, backlogs shrink, and governance holds, broader adoption follows naturally.
- Migrate with a transpiler: A built-in transpiler accelerates moves from legacy desktop analytics tools onto Databricks, Snowflake, or BigQuery. Platform teams can point to real modernization progress with workflows migrated, adoption climbing, and proof points stacking up for the cloud platform they've already invested in.
Who the evaluation is really for
A Prophecy evaluation doesn't serve a VP slide deck. The people who need to see it are the analysts and application teams who will actually use it day to day, alongside the platform team that has to trust it in production:
- Analysts feel the speed: They see how fast they can prepare data and build visual workflows step-by-step without filing a ticket.
- Platform teams feel the control: Governance, compute, and security stay entirely in their stack (Databricks, Snowflake, or BigQuery) rather than the vendor's.
What visual + code is not
Visual + code is often misunderstood, so clarity about what this paradigm doesn't do helps:
- It doesn't eliminate the need for data engineers: Engineers still own ETL, ingestion, and governance. They define the standards and testing frameworks analysts work within.
- It doesn't mean "anyone can do anything": Role-based access control (RBAC) and governance guardrails still apply. Analysts gain more capability within defined boundaries, rather than unbounded freedom.
- It doesn't replace BI tools: Visualization, dashboards, and reporting still happen in tools like Tableau, Power BI, and Looker. Visual + code prepares the data that those tools consume.
- It doesn't automatically solve organizational problems: Only 51% of data leaders feel their function is well understood, which reflects a culture and structure challenge rather than a tooling one.
What visual + code accomplishes is removing the artificial bottleneck where analysts wait weeks because the only way to express data logic runs through code they don't write. When the visual interface produces the same governed, version-controlled, testable artifact as the code interface, you've expanded who can build without compromising what gets built.
Build governed analytics workflows with Prophecy
Analysts stay stuck in request queues, and engineers stay buried in ad hoc transformation work. The bottleneck won't fix itself through hiring.
Prophecy brings the following to your analytics stack:
- AI agents: Specialized agents help analysts generate transformations, document logic, validate joins, and prepare datasets for analysis, so nobody has to build step-by-step from scratch.
- Visual interface + code: A bidirectional canvas where every visual action produces real, standard code that engineers can review, extend, and version-control in Git.
- Built-in governance: Continuous integration and continuous deployment (CI/CD), versioning, tests, and role-based access control (RBAC) so analytics workflows meet the same production standards as engineering-built ones.
- Cloud-native deployment: Visual workflows run as performant code directly on Databricks, Snowflake, or BigQuery. Compute and governance stay in your stack rather than the vendor's.
See Prophecy in action
HubSpot nurture analysis
A marketer on Prophecy's team analyzed a HubSpot nurture list to understand persona-level engagement. It included one contact dataset, a handful of plain-English prompts, and just one workflow:
- Visual-first analysis: The marketer built persona breakdowns and engagement cohorts from natural-language prompts instead of SQL.
- Code-level audit: He read the generated code behind each transformation, noting he'd "get suspicious" if something like thousands of student contacts disappeared unexpectedly.
- One shared artifact: The dashboard came out of the same workflow that the marketing team and leadership could review, not separate files.
Churn intelligence for a telecom company
A data engineer on Prophecy's team built a five-layer churn intelligence system across customer, churn, and internet usage data. Three CSV sources, one descriptive prompt, one workflow:
- Visual-first analysis: The engineer framed the business problem in a prompt and got back five stacked stages: quality checks, churn personas, Customer 360, retention simulation, and revenue exposure, each rendered as a visualization.
- Code-level audit: Each stage is backed by generated code available for inspection, which is what makes the output production-grade instead of a one-off query.
- One shared artifact: The whole churn system lives as a single workflow, with a visual view for reviewers and code for engineers, rather than scattered scripts and one-off notebooks.
With Prophecy, your analysts can prepare trusted data and ship analytics workflows at the pace the business actually needs, while your platform team keeps full control. Book a demo to see the AI agents in action.
FAQ
What's the difference between an analytics workflow and an ETL pipeline?
ETL pipelines, owned by data engineers, move and load raw data into a cloud data platform like Databricks, Snowflake, or BigQuery. Analytics workflows, owned by analytics teams, take that governed data and transform it further for specific analysis, dashboards, or ad hoc questions.
Does visual + code replace data engineering?
No. Data engineers continue to own ETL, ingestion, and platform governance. Visual + code expands who can build analytics workflows on top of that foundation, and it doesn't shift core data management responsibilities to analysts.
How do AI agents fit into self-service analytics?
AI agents handle different tasks across the analytics workflow, including suggesting transformations, documenting logic, validating joins, and preparing datasets. They make self-service realistic by removing the code-writing barrier, while engineers' standards still govern what ships.
Can visual + code work with my existing BI tools?
Yes. Visual + code platforms prepare and transform data, while BI tools like Tableau, Power BI, and Looker handle visualization and reporting. The two are complementary, and better-prepared data lets BI tools do what they do best.
Ready to see Prophecy in action?
Request a demo and we’ll walk you through how Prophecy’s AI-powered visual data pipelines and high-quality open-source code empower everyone to accelerate data transformation.

