Vivo

Arlington, Georgia, United States

Job Type: Contract

We are looking for a skilled Data Engineer to design, develop, and maintain scalable data pipelines and infrastructure. The ideal candidate will be responsible for collecting, storing, processing, and analyzing large datasets to support business intelligence and analytics needs. You will work closely with data scientists, analysts, and software engineers to ensure efficient data flow and accessibility.

Key Responsibilities:

  • Design, develop, and optimize data pipelines, ETL processes, and data warehouses.
  • Build and maintain real-time and batch data processing solutions.
  • Develop and maintain data models, schemas, and databases for structured and unstructured data.
  • Implement and maintain data integration solutions across multiple systems.
  • Monitor and troubleshoot data pipelines to ensure reliability, accuracy, and efficiency.
  • Collaborate with data scientists and analysts to support machine learning models and business intelligence.
  • Ensure data quality, governance, and security compliance.
  • Optimize and improve database performance and storage solutions.
  • Work with big data technologies, cloud services (AWS, GCP, Azure), and distributed systems.
  • Automate data workflows and CI/CD pipelines for data processing.

Required Skills & Qualifications:

  • Bachelor’s/Master’s degree in Computer Science, Data Engineering, or a related field.
  • 3+ years of experience in data engineering, data architecture, or related fields.
  • Strong experience with SQL and NoSQL databases (PostgreSQL, MySQL, MongoDB, etc.).
  • Expertise in ETL/ELT tools (Apache Airflow, Talend, dbt, etc.).
  • Hands-on experience with Big Data technologies (Hadoop, Spark, Kafka, Flink, etc.).
  • Proficiency in programming languages (Python, Scala, Java).
  • Familiarity with cloud data services (AWS Redshift, Google BigQuery, Azure Synapse, etc.).
  • Strong understanding of data modeling, warehousing, and data lake architectures.
  • Experience with APIs and data streaming frameworks.
  • Knowledge of containerization and orchestration tools (Docker, Kubernetes).
  • Strong problem-solving skills and ability to work in an agile environment.

Preferred Skills:

  • Experience in machine learning pipelines and MLOps.
  • Knowledge of Graph databases (Neo4j, Amazon Neptune).
  • Familiarity with CI/CD for data infrastructure.
  • Hands-on experience with data governance and security frameworks.
  • Experience working with real-time analytics and business intelligence tools (Tableau, Power BI).

Benefits & Perks:

  • Competitive salary and performance-based bonuses.
  • Health, dental, and vision insurance.
  • Flexible working hours and remote work options.
  • Learning and career development opportunities.
  • Generous paid time off and parental leave.
  • Employee wellness programs and team-building activities.

Apping Technology

Robbinsville, New Jersey, United States

Job Type: Full-Time

Apping Technology is an innovative technology company specializing in AI-driven solutions, ERP systems, and cloud-based SaaS platforms. We focus on delivering scalable, high-performance applications with a strong emphasis on security, automation, and reliability.

Job Overview

As a Site Reliability Engineer (SRE) at Apping Technology, you will be responsible for designing, building, and maintaining scalable infrastructure, ensuring system reliability, automating workflows, and improving incident response. You will work closely with development, operations, and security teams to enhance our cloud-based services and ensure seamless performance.

Key Responsibilities

Reliability & Performance

  • Design, implement, and manage highly available and scalable infrastructure in cloud environments (AWS/Azure/DigitalOcean).
  • Monitor system performance, identify bottlenecks, and optimize for speed, resilience, and cost-efficiency.
  • Establish SLAs, SLOs, and error budgets to balance reliability and feature development.

Automation & Infrastructure as Code (IaC)

  • Develop and maintain IaC using Terraform, Ansible, or equivalent tools.
  • Automate deployment processes with CI/CD pipelines (GitHub Actions, GitLab CI/CD, Jenkins).
  • Implement auto-scaling, failover mechanisms, and automated recovery strategies.

Incident Management & Monitoring

  • Set up observability tools (Prometheus, Grafana, New Relic, Datadog, ELK stack) for proactive monitoring.
  • Handle incident response, root cause analysis (RCA), and post-mortem processes.
  • Ensure log management and monitoring solutions are in place for system health tracking.

Security & Compliance

  • Implement cloud security best practices (IAM, firewalls, encryption, vulnerability management).
  • Ensure compliance with industry standards and regulations such as ISO 27001, SOC 2, and GDPR.
  • Conduct periodic security audits and penetration testing.

Database & Infrastructure Management

  • Optimize and manage PostgreSQL, MySQL, and NoSQL databases for performance and availability.
  • Ensure regular backups, failover mechanisms, and disaster recovery plans.
  • Scale database solutions to meet business needs.

Collaboration & DevOps Culture

  • Work closely with development teams to integrate reliability into the software development lifecycle.
  • Enable developers to adopt DevOps best practices through self-service infrastructure and automation.
  • Provide training and documentation for incident response and best practices.

Qualifications & Skills

Must-Have:

  • 3+ years of experience in Site Reliability Engineering (SRE), DevOps, or Cloud Engineering.
  • Strong experience with AWS, Azure, or DigitalOcean (e.g., EC2, RDS, S3, and IAM on AWS; managed Kubernetes).
  • Expertise in Linux administration, networking, and shell scripting.
  • Hands-on experience with Docker, Kubernetes, and container orchestration.
  • Proficiency in Terraform, Ansible, Helm, or other IaC tools.
  • Experience with monitoring & logging tools like Prometheus, Grafana, ELK, Datadog.
  • Familiarity with CI/CD pipelines and automation (GitHub Actions, GitLab CI, Jenkins).
  • Strong programming and scripting skills in Python, Go, or Bash.

Good to Have:

  • Knowledge of serverless architectures and event-driven cloud computing.
  • Experience with cloud cost optimization strategies.
  • Exposure to AI/ML infrastructure in cloud environments.
  • Familiarity with multi-cloud and hybrid cloud setups.

Why Join Us?

  • Cutting-edge projects in AI, SaaS, and cloud computing.
  • Flexible work environment (remote options available).
  • Continuous learning & development opportunities.
  • Competitive salary & benefits package.