Mod Op is looking to solve complex problems with innovative solutions by enabling data-driven decision-making through visualization tools and robust data pipelines.
Requirements
Experience with GCP or AWS, with a Data Engineering or ML focus.
Strong proficiency in Python and experience with its data science libraries.
Experience with SQL (Teradata, BigQuery, etc.) and NoSQL databases.
Hands-on experience with Google Looker and Tableau for reporting and dashboards.
Experience working with CRM, marketing platforms, and analytics tools.
Hands-on experience with GCP, Azure, or AWS services for ML workflows, model training, and AI model deployment.
Knowledge of Alteryx for workflow automation and data preparation.
Responsibilities
Design, develop, and maintain scalable ETL/ELT pipelines in GCP, Azure, and AWS using services such as Dataflow, Composer, Azure Synapse, and AWS Data Pipeline.
Work with structured and unstructured data sources, including CRM and marketing data platforms.
Develop and optimize queries for SQL & NoSQL databases (Teradata, BigQuery, Cassandra, etc.).
Implement models using GCP Vertex AI, utilizing Python and its data science libraries (Pandas, NumPy, Scikit-learn, etc.) for data analysis and ML model deployment.
Build and manage dashboards using Google Looker and Tableau to provide business insights.
Work closely with data analysts, marketing teams, and other stakeholders to understand business needs and implement effective data solutions.
Other
The position operates under a hybrid work model, requiring in-office presence at the Grapevine, Texas location two days per week, with the remaining days worked remotely.
On the job training and career growth opportunities.
Access to LinkedIn courses.
Talented team environment, collaborative offices, fun company culture with a great balance of work and play.
Vacation may be taken by the day or by the week, subject to approval of the employee's request.