TL;DR
- Analytics teams benefit from complementary, AI-accelerated tooling designed for their needs.
- As teams evaluate cloud migration from desktop tools like Alteryx, Prophecy offers a governed, cloud-native approach to agentic data preparation that runs directly on existing infrastructure with no retraining or rip-and-replace required.
- Code-first tools are powerful for engineers, but analytics teams accustomed to visual workflows benefit from AI-accelerated tools that bridge the skills gap without re-centralizing work.
- Prophecy enables AI-accelerated data preparation on cloud data platforms with AI agents, visual workflows, governed code execution, and no separate compute layer.
- Prophecy's cloud-native architecture eliminates duplicate infrastructure and data movement costs, consolidating analytics spend into the platform you already operate.
Eighty percent of analysts still can't build their own data workflows. Among non-IT professionals, the share who can build their own BI and data workflows has held at 20% for the past decade, and for analytics teams on AWS, that number tells a familiar story. Data engineers have done their part by building Extract, Transform, Load (ETL) pipelines and governing data in the cloud data platform. Analysts still face bottlenecks when they need to further transform that data, build data workflows (sometimes also called data pipelines), or run ad hoc queries for analysis.
The business wants fast, trusted, accurate data, and analysts want to deliver it without waiting on engineering. With Prophecy's agentic data prep, analysts build and run governed data workflows themselves on your cloud platform, within your guardrails. Engineering stops being the bottleneck, and the business gets what it's been asking for. This article examines the current tooling landscape for analytics teams on AWS and how Prophecy's AI-accelerated approach changes the equation.
AWS-native tools and where analytics teams fit
Analytics teams building their own data workflows on AWS often need complementary tooling beyond what's designed for data engineering. AWS offers a deep stack of data services for ETL pipelines, data ingestion, and governance, but each one targets a different skill set and use case.
The following breakdown shows how the landscape is typically organized:
- AWS Glue DataBrew: A visual, no-code option with more than 250 built-in data transformations, DataBrew is designed to prepare and normalize datasets. Analytics teams that need scheduled data workflows, workflow branching, or multi-source orchestration might pair DataBrew with additional services depending on the use case.
- Amazon Athena: A query service for running Structured Query Language (SQL) queries on cataloged data. Cataloged and partitioned data must be in place before Athena can work effectively, and data engineering teams typically handle this using Glue. Managing table formats like Iceberg at scale can require dedicated teams for maintenance, compaction, and access controls.
- Step Functions and Amazon EMR: Orchestration and processing tools designed for engineering workflows. Step Functions uses Amazon States Language syntax, and in practice, data engineers build the workflow orchestration. These tools aren't designed for analytics team self-service.
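To make the last point concrete, here is a minimal, hypothetical Amazon States Language definition for a two-step workflow (the job name and topic ARN are placeholders). Even a simple run-then-notify sequence requires hand-authoring JSON state machines like this, which is firmly data engineering territory:

```json
{
  "Comment": "Hypothetical two-step workflow: run a Glue job, then notify on success",
  "StartAt": "RunGlueJob",
  "States": {
    "RunGlueJob": {
      "Type": "Task",
      "Resource": "arn:aws:states:::glue:startJobRun.sync",
      "Parameters": { "JobName": "example-transform-job" },
      "Next": "NotifySuccess"
    },
    "NotifySuccess": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:example-topic",
        "Message": "Workflow completed"
      },
      "End": true
    }
  }
}
```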
For analytics teams looking to build their own data workflows on governed data, AI-accelerated tools like Prophecy complement the AWS stack by providing visual workflows and an agentic experience with multiple AI agents. This approach reduces bottlenecks caused by limited engineering resources, complicated prioritization, and inefficient handoffs. Want to see what that looks like? Try Prophecy free.
The case for cloud-native architecture
Cloud-native architecture eliminates the duplicate infrastructure and data movement costs that desktop-originated tools introduce. Many analytics teams built their workflows on desktop tools like Alteryx Designer, and as organizations migrate to cloud data platforms, those teams are evaluating how their analytics workflows fit into the new architecture. On AWS, that transition raises questions about cost structure and where data processing happens.
Desktop-originated architectures might move data out of the cloud platform for processing or spin up separate Elastic Compute Cloud (EC2) infrastructure on top of the clusters an enterprise already operates. For context, Alteryx One Starter Edition lists at $250 per user per month on the AWS Marketplace. Cost governance is a further consideration: cloud costs can spike as workloads grow, and there's no built-in estimator that shows the cost of a data workflow before users hit run.
Prophecy takes a cloud-native approach and runs directly on cloud data platforms like Databricks, Snowflake, or BigQuery. There's no separate compute layer, no data movement, and no duplicate infrastructure. Your platform team stays in control of compute, governance, and security. For analytics leaders watching cloud spend, this consolidates cost layers: instead of managing platform compute, separate analytics infrastructure, data egress fees, and per-user licensing independently, teams work within a single cost structure tied to infrastructure they already own.
For teams moving data workflows into their cloud data platform, Prophecy's transpiler makes migration straightforward, converting existing workflows to run on the compute you already operate.
Bridging the skills gap for analytics teams
The skills gap between code-first tools and analytics team capabilities is a primary factor in failed migration projects. Code-first tools are technically powerful and serve data engineering teams well, but when analytics teams evaluate them for building their own data workflows, the required skill sets pose a real barrier. For teams accustomed to visual workflows, these tools serve a different audience and working style.
Handing analysts raw AI code generation tools introduces a different consideration. Without standardization, human review, and Git retention, teams might gain speed without reliability and end up with a codebase no one can maintain.
Many teams recognize the resulting pattern. Cost or cloud strategy triggers a migration decision, and teams select the replacement tool on technical merit. Analytics teams that owned visual workflows may not have the coding skills the new tool requires, so work re-centralizes to data engineers, and the engineering backlog grows rather than shrinks. Prophecy addresses this by combining multiple AI agents with visual workflows so analytics teams can build data workflows independently, with speed, efficiency, and confidence, while working alongside the code-first tools their engineering counterparts prefer.
Prophecy enables agentic, AI-accelerated data preparation
Prophecy runs directly on your cloud data platform, and analytics teams work with governed data that data engineers have already prepared. Here's what that looks like in practice.
Visual workflows that compile to open-source code
Analysts build data workflows using a drag-and-drop gem architecture similar to the visual canvas many teams already know. Every visual workflow compiles into open-source code, so the data platform team sees real code rather than a black box. You iterate visually while the underlying codebase remains the single source of truth.
Multiple AI agents that accelerate every step
Prophecy's AI agents turn business intent into visual workflows; they assist rather than act autonomously. Different agents handle different tasks, such as generating transformation logic, suggesting fixes, or creating documentation. Agents interact with the canvas in real time, showing each step transparently and inviting feedback through natural language. Pairing AI with human review, standardization, and Git retention delivers the speed of AI with the reliability of engineering. Analysts generate, refine, and deploy their work while remaining in control throughout.
Governed by design through Unity Catalog
Prophecy integrates directly with Unity Catalog, providing single sign-on (SSO), consistent governance, and native code execution. Agent permissions are bound by user permissions, so analysts work only with data they're authorized to see. Every data workflow change is version-controlled in Git, and data management and governance stay with the data engineering team.
Production-ready failure handling analysts can use
When data workflows break in production, Prophecy surfaces the failing operator on the visual canvas with the broken data visible and an AI-suggested fix. Analysts don't need to decipher stack traces; they can identify, understand, and resolve issues directly in the visual interface.
A migration path for existing visual workflows
Teams often start with the efficiency use case, adopting a faster way to build and manage data workflows alongside what they already run. When the value is clear, migration follows naturally. For teams transitioning from other visual tools, a common starting point is importing workflows from their existing platform. The transpiler accelerates migration so teams can point to real progress quickly.
The cost structure is simpler
Prophecy Enterprise Express is listed on the AWS Marketplace at $48,000/year for up to 20 users as a single dedicated software-as-a-service (SaaS) instance with no additional Prophecy infrastructure costs. Compare that with a dual-cost-stream model that combines platform licensing with separate EC2 and Elastic Block Store (EBS) infrastructure, layered on top of the cloud data platform you're already running.
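Using the two list prices cited above, a back-of-envelope license comparison for a 20-user team looks like this. Treat it as a sketch only: list prices change, and it deliberately ignores infrastructure, egress, and any discounting.

```python
# Back-of-envelope annual license comparison at the list prices cited above.
# Assumes a 20-user team; excludes EC2/EBS infrastructure, egress, and discounts.

users = 20
alteryx_per_user_per_month = 250   # Alteryx One Starter Edition, AWS Marketplace list price
prophecy_flat_annual = 48_000      # Prophecy Enterprise Express, up to 20 users

alteryx_annual = users * alteryx_per_user_per_month * 12
print(f"Alteryx licensing:  ${alteryx_annual:,}/year")        # $60,000/year
print(f"Prophecy licensing: ${prophecy_flat_annual:,}/year")  # $48,000/year
```

Note that per-user pricing also scales linearly with headcount, while the flat instance price covers every seat up to the 20-user cap, so the gap widens as teams grow toward that cap.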
Build self-service data workflows on AWS with Prophecy
Analytics teams on AWS have access to plenty of data. The remaining challenge is what comes next: transforming data for analysis, building data workflows, and running ad hoc queries without submitting tickets to engineering. Prophecy closes that gap as an AI-accelerated data preparation platform that lets analysts work independently on governed data within your existing cloud infrastructure.
Prophecy gives analytics teams these core capabilities:
- AI agents: Multiple agents guide analysts step-by-step through transformation logic, fix suggestions, and data workflow development via natural language. Each agent handles a specific task, from generating joins to improving data quality, so analysts stay productive without writing code.
- Visual workflows and open-source code: Analysts build on a drag-and-drop canvas while every data workflow compiles to auditable open-source code. Engineering teams get full visibility into the underlying code, and every change is version-controlled.
- Built-in governance: Integrates with Unity Catalog, Git-based versioning, and role-based access controls (RBAC), with no separate governance layer required. Data management and governance remain with the data engineering team.
- Cloud-native execution: Runs directly on Databricks, Snowflake, or BigQuery with no separate compute layer. Your platform team stays in control of infrastructure, security, and cost.
Prophecy vs. Alteryx: head-to-head
Analytics leaders are identifying the productivity gap and looking for a better path. Data platform leaders want efficiency, data quality, and something their engineering team can trust and govern. Prophecy speaks to both by making analysts self-sufficient while giving platform teams full visibility and control.
With Prophecy, your analytics team can build production-ready data workflows faster, turning governed data into insights independently while data engineers stay focused on what they do best. Book a demo to see how Prophecy's AI agents and agentic AI features can accelerate your team's analytics.
FAQ
What are self-service data workflows on AWS?
Self-service data workflows let analysts independently transform governed data and build analytics workflows using AI agents and visual tools, without submitting tickets to data engineering teams. ETL pipelines handle data ingestion; data workflows for analytics focus on what comes next.
How does Prophecy work with data already in the cloud platform?
Prophecy connects to cloud data platforms like Databricks, Snowflake, or BigQuery, letting analysts build data workflows on data that data engineers have already prepared and governed. Compute, governance, and security remain within your existing stack.
Does Prophecy replace data engineering tools or ETL pipelines?
No. Prophecy is designed for analytics teams to transform governed data and build data workflows. ETL pipelines, data ingestion, and data governance remain the responsibility of data engineering teams. Prophecy is typically used alongside other data tools.
Can I migrate existing workflows into Prophecy?
Yes. Prophecy's transpiler makes migration from tools like Alteryx straightforward. Teams typically start alongside existing tools, and migration follows naturally as the value becomes clear.
Ready to see Prophecy in action?
Request a demo and we'll walk you through how Prophecy's AI-powered visual data pipelines and high-quality open-source code empower everyone to accelerate data transformation.

