Data/Information Architect
Circle K
Web Search Portals, Libraries, Archives, and Other Information Services · Computing Infrastructure Providers, Data Processing, Web Hosting, and Related Services · Media Streaming Distribution Services, Social Networks, and Other Media Networks and Content Providers · Management, Scientific, and Technical Consulting Services · Other Professional, Scientific, and Technical Services
Phoenix, AZ · April 6th, 2026
Databricks Architect
Mandatory Skills: Azure Databricks, Azure Data Lake, Ansible, SQL, Terraform, Kafka, Azure CLI, Databricks CLI, PowerShell and/or Bash.
What Could Set You Apart:
Must have designed the end-to-end (E2E) architecture of a unified data platform, covering all aspects of the data lifecycle: ingestion, transformation, operations, and consumption.
Experience handling the key activities of an Enterprise Architecture practice for a specific sector.
Managing and mentoring architecture talent in the respective sector.
Should have delivered architecture initiatives that demonstrate clear business efficiency in line with business strategy, and taken end-to-end responsibility and accountability for the architecture.
Should have delivered architecture roadmaps and developed delivery blueprints for the technology design.
Excellent communication skills and customer management skills.
Should be able to deliver architectural initiatives across digital, cloud, and open-source technologies, and should have performed a similar role, at a similar capacity, on large transformational engagements.
Hands-on development: design and develop applications using Databricks. Experience with Azure, AWS, or other cloud technologies.
In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib.
Good to have programming experience with Python, SQL, or Spark/Scala.
Strong understanding of data modeling and of defining conceptual, logical, and physical data models.
To be successful in this role, this individual must be familiar with data integration, data modeling, data warehousing, and big data technologies, and should be a subject matter expert in these areas.
Maintains close awareness of new and emerging technologies and their potential application for service offerings and products.
Experience in architecting and designing technical solutions for cloud-centric solutions based on industry standards using IaaS, PaaS, and SaaS capabilities.
Experience building and supporting mission-critical technology components with DR capabilities.
Experience with multi-tier system and service design & development for large enterprises.
Exposure to infrastructure and application security technologies and approaches.
Familiarity with requirements gathering techniques.
Required Qualifications:
Must have excellent coding skills in either Python or Scala, preferably Python.
Must have 10+ years of experience in architecture, design, implementation, and analytics solutions, with 12+ years total in the data engineering domain.
Must have designed and implemented at least 2-3 projects end-to-end in Databricks.
Must have 3+ years of experience in Databricks, covering the components below:
Delta Lake
Databricks Connect (dbConnect)
Databricks REST API 2.0
SQL endpoints and the Photon engine
Unity Catalog
Databricks Workflows orchestration
Security management
Platform governance
Data security
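Since the component list above includes the Databricks REST API (db API 2.0), here is a minimal sketch of what driving that API programmatically looks like. The workspace URL, token, and job ID are placeholders, and the request is built but never sent:

```python
import json
from urllib.request import Request

def build_run_now_request(host: str, token: str, job_id: int,
                          notebook_params: dict) -> Request:
    """Build (but do not send) a Databricks Jobs API 2.0 run-now request."""
    payload = json.dumps({"job_id": job_id,
                          "notebook_params": notebook_params}).encode()
    return Request(
        url=f"{host}/api/2.0/jobs/run-now",
        data=payload,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Placeholder workspace and token, for illustration only.
req = build_run_now_request("https://example.cloud.databricks.com",
                            "dapi-placeholder-token", 1234,
                            {"run_date": "2026-04-06"})
print(req.full_url)  # https://example.cloud.databricks.com/api/2.0/jobs/run-now
```

In practice the request would be dispatched with `urllib.request.urlopen` (or, more commonly, the Databricks SDK or CLI), with the token pulled from a secret store rather than hard-coded.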
Must have applied appropriate architectural principles to design the best-suited solution for each problem.
Must be well versed with Databricks Lakehouse concept and its implementation in enterprise environments.
Must have strong understanding of Data warehousing and various governance and security standards around Databricks.
Must have knowledge of cluster optimization and its integration with various cloud services.
Must have a good understanding of how to build complex data pipelines.
Must be strong in SQL and Spark SQL.
Must have strong performance optimization skills to improve efficiency and reduce cost.
Must have worked on designing both batch and streaming data pipelines.
Must have extensive knowledge of the Spark and Hive data processing frameworks.
Must have worked on a cloud platform (Azure, AWS, or GCP) and its common services, such as ADLS/S3, ADF/Lambda, and cloud databases.
Must be strong in writing unit and integration tests.
Must have strong communication skills and have worked with cross-platform teams.
Must have a strong attitude toward learning new skills and upskilling existing ones.
Responsible for setting best practices around Databricks CI/CD.
Must understand composable architecture to take full advantage of Databricks capabilities.
Experience with machine learning tools such as MLflow, Databricks AI/ML, Azure ML, AWS SageMaker, etc.
Experience in distilling complex technical challenges to actionable decisions for stakeholders and guiding project teams by building consensus and mediating compromises when necessary.
Experience coordinating the intersection of complex system dependencies and interactions.
Experience in solution delivery using common methodologies especially SAFe Agile but also Waterfall, Iterative, etc.
Preferred Qualifications:
Good to have REST API knowledge.
Good to have an understanding of cost distribution.
Good to have knowledge of migration projects that build a unified data platform.
Good to have knowledge of dbt.
Experience with DevSecOps, including Docker and Kubernetes.
Full-lifecycle software development methodologies, patterns, frameworks, libraries, and tools.
Knowledge of programming and scripting languages such as JavaScript, PowerShell, and Bash.
Experience with data ingestion technologies such as Azure Data Factory, SSIS, Pentaho, Alteryx.
Experience with visualization tools such as Tableau, Power BI.
Demonstrated knowledge of relevant industry trends and standards.