
Data Engineer

Belcan Corporation
United States, Arizona, Glendale
Aug 11, 2025
Job Description

Job Title: Data Engineer
Location: Glendale, AZ
Zip Code: 85301
Duration: 12 months
Pay Rate: $66.71/hr.
Keywords: #Glendalejobs; #DataEngineerjobs
Start Date: Immediate

Job Description:
Client Corporation is seeking a talented and ambitious data engineer to join our team in designing, developing, and deploying industry-leading data science and big data engineering solutions. Using Artificial Intelligence (AI), Machine Learning (ML), and big data platforms and technologies, you will increase efficiency in complex work processes and enable and empower data-driven decision making, planning, and execution throughout the lifecycle of mega-EPC projects.
* You yearn to be part of groundbreaking projects and cutting-edge research that deliver world-class solutions on schedule
* You are motivated to find opportunity in, and develop solutions for, evolving challenges; passionate about your craft; and driven to deliver exceptional results
* You love to learn new technologies and mentor junior engineers to raise the bar on your team
* You are imaginative and enthusiastic about intuitive user interfaces, as well as new and emerging concepts and techniques

Job Responsibilities:
* Design, analyze, model, develop, deploy, and operate big data pipelines
* Collaborate with a team of other data engineers, data scientists, and business subject matter experts to process data and prepare data sources for a variety of use cases including predictive analytics, generative AI, and computer vision.
* Mentor other data engineers to develop a world-class data engineering team
* Ingest, process, and model data from structured, unstructured, batch, and real-time sources using the latest techniques and technology stack

Basic Qualifications:
* Bachelor's degree or higher in Computer Science, or equivalent degree and 5+ years of working experience
* In depth experience with a big data cloud platform such as Azure, AWS, Snowflake, Palantir, etc.
* Strong grasp of programming languages and libraries (Python, Scala, SQL, Pandas, PySpark, or equivalent) and a willingness to learn new ones. Strong understanding of structuring code for testability.
* Experience writing database-heavy services or APIs
* Strong hands-on experience building and optimizing scalable data pipelines, complex transformations, architecture, and data sets with Databricks or Spark, Azure Data Factory, and/or Palantir Foundry for data ingestion and processing
* Proficient in distributed computing frameworks, with familiarity in handling drivers, executors, and data partitions in Hadoop or Spark.
* Working knowledge of queueing, stream processing, and highly scalable data stores such as Hadoop, Delta Lake, Azure Data Lake Storage (ADLS), etc.
* Deep understanding of data governance, access control, and secure view implementation
* Experience in workflow orchestration and monitoring
* Experience working with and supporting cross-functional teams

Preferred Qualifications:
* Experience with schema evolution, data versioning, and Delta Lake optimization
* Exposure to data cataloging solutions in Foundry Ontology
* Professional experience implementing complex ML architectures in popular frameworks such as TensorFlow, Keras, PyTorch, scikit-learn, and CNTK
* Professional experience implementing and maintaining MLOps pipelines in MLflow or Azure ML
* Fully vaccinated against the COVID-19 virus (proof required)

Belcan is an equal opportunity employer. Your application and candidacy will not be considered based on race, color, sex, religion, creed, sexual orientation, gender identity, national origin, disability, genetic information, pregnancy, veteran status, or any other characteristic protected by federal, state, or local laws.
