About Us
Do you want to be a part of the clean energy movement? Are you passionate about improving our environment for this generation and those to follow? Are you ready to take on new challenges and collaborate with a future-focused team leading the way into new markets? Join Nexamp!
This is where you can learn from industry leaders and become one yourself. It's fast-paced, mission-based work that challenges the status quo. Be on the team that's changing the world.
As a member of our team, you'll have the opportunity to work closely with our US counterparts, gaining valuable experience and exposure to the latest technologies in the American market, all while working remotely from Guatemala.
Job Description
Nexamp is seeking a Cloud Data Engineer to help build a common data platform and asset to support the company’s mission to address climate change by making solar power accessible for everyone.
Proficiency in English, both spoken and written, is required to effectively communicate and collaborate with U.S.-based counterparts.
We are looking for a true engineer: someone who can understand business needs and build the right data pipelines and solutions using cloud-based services. The Data Management team is small but growing rapidly, so we need self-starters who are eager to innovate and collaborate on a new data landscape, one that optimizes everything from the construction and operation of solar arrays to the acquisition and retention of customers for Nexamp’s evolving decarbonization product set.
Key Responsibilities
Design and implement an Azure-based modern data platform, including data pipeline, security, messaging, and quality frameworks.
Build configuration-driven data pipelines to standardize source data into a common data asset and generate business-specific outputs.
Work closely with Data Scientists and application developers to meet their data requirements.
Diagnose, debug, and correct defects in data pipelines and systems.
Create scripts and programs to automate data operations.
Monitor, maintain, and proactively improve the reliability of data pipelines.
Participate in testing, QA, and documentation activities.
Qualifications
Experience:
3–5 years of hands-on experience in a data engineering role.
Skills:
Proficiency in English (spoken and written).
Strong programming skills in Python, particularly for data processing (e.g., Pandas, PySpark).
Experience building multi-stage data pipelines using cloud-based technologies (Azure preferred, AWS or GCP experience acceptable).
Strong knowledge of SQL and relational databases (e.g., SQL Server, MySQL).
Experience working with distributed data processing frameworks such as Apache Spark.
Experience with source control tools (e.g., Git, Azure DevOps, GitHub, Bitbucket).
Familiarity with workflow orchestration concepts (e.g., task dependencies, retries, error handling).
Ability to debug and proactively improve pipelines and systems.
Strong communication and collaboration skills across technical and non-technical teams.
Comfort working independently in a fast-paced, Agile Scrum environment.
Solid problem-solving skills and the ability to learn independently.
Preferred / Nice-to-Have Skills:
Hands-on experience with Microsoft Azure services (e.g., Azure Data Factory, Azure Databricks).
Experience with NoSQL databases (e.g., MongoDB, Redis) or graph databases.
Familiarity with data pipeline monitoring and alerting best practices.
What We Offer
Competitive salary.
Opportunity for professional growth and development.
Supportive and collaborative work environment.
Exposure to cutting-edge technologies and methodologies.
Meaningful work and the chance to change the world alongside innovative, dedicated, and motivated peers.