This week’s Cloud Computing Jobs – AWS and Linux Administration

(Update March 2019) To get a definition of the roles needed to maximize your organization’s investment in cloud, explore the latest skills in demand by job role with Cloud Academy’s Cloud Roster™.

Cloud Academy is always on the lookout for the most promising Cloud Computing opportunities.
Employers: interested in reaching our readers with your job openings? Send us an email.
This week we are back to searching for interesting AWS jobs worldwide, with no particular criteria except… AWS. You will find opportunities below in the U.S. (on both the East and West Coasts), Australia, and Canada.

1.  Data Engineer

GrubHub – New York, NY, US

Job Description:
Be the first of your friends to declare, “I love where I work!” and actually mean it. Laugh hard and work hard with some of the best and brightest in the tech industry.
GrubHub Holdings Inc. is the nation’s leading online and mobile food-ordering company dedicated to connecting hungry diners with local takeout restaurants. The GrubHub Holdings Inc. portfolio of brands includes GrubHub, Seamless, MenuPages and Allmenus. The company’s online and mobile ordering platforms allow diners to order directly from thousands of takeout restaurants across the country and London, and every order is supported by the company’s 24/7 customer service. GrubHub Holdings Inc. has offices in Chicago, New York City and London.

Roles and Responsibilities:

  • Experience with columnar storage and massive parallel processing data warehouses (Redshift preferred)
  • Experience modeling and querying for NoSQL databases (Cassandra preferred, HBase acceptable)
  • Experience working within the Amazon Web Services (AWS) ecosystem (S3, EC2, etc.)
  • Develop compelling PoC’s for data solutions using emerging technologies for real-time and big data ingestion and processing
  • Contribute to designing, building, and deploying high-performance production platforms/infrastructure to support data warehousing, real-time ETL, and batch big-data processing; help define standards and best practices for enterprise usage
  • Design, build, and maintain processes and components of a streaming data/ETL pipeline to support real-time analytics (from requirements to data transformation, data modeling, metric definition, reporting, etc)
  • Focus on data quality – detect data/analytics quality issues all the way down to root cause, and implement fixes and data audits to prevent/capture such issues
  • Collaborate with data scientists to design and develop processes to further business unit and company-wide data science initiatives on a common data platform
  • Translate business analytic needs into enterprise data models and ETL processes to populate them
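The data-quality responsibility above — detecting issues before they pollute downstream analytics — can be sketched as a small audit pass over incoming records. This is a minimal illustration, not GrubHub's actual pipeline; the `order_id` and `diner_id` field names are hypothetical.

```python
# A minimal data-quality audit: flag rows with missing required fields and
# duplicate IDs before they reach the warehouse. Field names are illustrative.

def audit_rows(rows, required=("order_id", "diner_id")):
    """Return (clean_rows, issues), where issues maps a reason to bad rows."""
    issues = {"missing_field": [], "duplicate_id": []}
    seen_ids = set()
    clean = []
    for row in rows:
        if any(row.get(field) is None for field in required):
            issues["missing_field"].append(row)
            continue
        if row["order_id"] in seen_ids:
            issues["duplicate_id"].append(row)
            continue
        seen_ids.add(row["order_id"])
        clean.append(row)
    return clean, issues

rows = [
    {"order_id": 1, "diner_id": "a"},
    {"order_id": 1, "diner_id": "b"},   # duplicate order_id
    {"order_id": 2, "diner_id": None},  # missing diner_id
]
clean, issues = audit_rows(rows)
print(len(clean), len(issues["duplicate_id"]), len(issues["missing_field"]))
# prints: 1 1 1
```

In a real streaming pipeline the same check would run per micro-batch, with the `issues` buckets routed to an audit table so root causes can be traced rather than silently dropped.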

2.  DevOps Engineer

ThoughtWorks – San Francisco, USA

Job Description:

At ThoughtWorks our dedication to the art of software delivery has long meant driving deeper collaboration between different parts of an organization. We literally wrote the book on Continuous Delivery, which recognizes the deeper role of infrastructure and operations as an integral part of the delivery process, and have been an active part of the DevOps community since the beginning.

As a DevOps Engineer at ThoughtWorks you are responsible for bringing and spreading the knowledge, ideas, and hands-on implementation skills needed to deliver and run software services.
We help our customers adopt DevOps approaches, break out of rigid, traditional ways of working, and move to a more customer-focused and agile approach. We currently have multiple positions for experts in infrastructure as code and DevOps to join us.

Roles and Responsibilities:

  • Extensive experience working with server virtualization (VMWare, Xen, etc.), IaaS and PaaS cloud (AWS, Azure, GCE, Rackspace, Digital Ocean, Heroku, OpenStack, CloudStack, CloudFoundry.)
  • Infrastructure provisioning tools (such as Docker, Chef, Puppet, Ansible, Packer, CloudFormation, Terraform)

As a DevOps Engineer at ThoughtWorks you are responsible for ensuring that the team and client have an understanding of operational requirements, and take shared responsibility for designing and implementing the infrastructure for delivering and running software services. This includes hands-on involvement in building deployment and testing pipelines, automated provisioning of cloud infrastructure, and infrastructure support services such as monitoring. There are a lot of moving pieces to fit together so communication is essential to ensure stuff is not missed. You will be depended upon for advice regarding the cross-functional aspects of user stories which may not always be obvious from the start. Watching out for performance bottlenecks and scaling pitfalls are all within the realms of an Infrastructure Developer at ThoughtWorks. In addition to technical skills, at ThoughtWorks we also need excellent coaches so your patience and a desire to take others along with you is absolutely key. If this sounds appealing then we want to talk to you!
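The "infrastructure as code" expertise listed above (CloudFormation, Terraform, and friends) boils down to describing infrastructure in versionable text rather than clicking through a console. A minimal sketch, generating a CloudFormation template programmatically — the instance type and AMI ID are placeholders, not recommendations:

```python
import json

# Build a minimal CloudFormation template as plain data, then serialize it.
# Keeping templates in code means they can be reviewed, diffed, and tested
# like any other source file.

def ec2_template(instance_type="t2.micro", ami="ami-12345678"):
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "WebServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": instance_type,
                    "ImageId": ami,
                },
            }
        },
    }

print(json.dumps(ec2_template(), indent=2))
```

The resulting JSON could then be handed to CloudFormation to provision the stack; the same idea underlies Terraform's HCL files and Puppet manifests.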

3.  Linux Systems Administrator / DevOps Engineer

Finite IT Recruitment Solutions – Melbourne

Job Description:
This role is part of the infrastructure team that manages the physical and virtual systems and networks that the company runs on. You will be responsible for maintaining and growing the company's infrastructure and systems, and you'll play a crucial role in how the team manages core programs and processes.
You will bring 5 or more years of experience building and managing Linux systems (they use Ubuntu and RHEL) in highly available, scalable environments. You will also have extensive experience using Amazon Web Services (AWS), since it plays a big role in their current and future infrastructure plans.

Roles and Responsibilities:

  • Managing and optimizing AWS cloud infrastructure.
  • Improving their logging and monitoring processes.
  • Debugging critical infrastructure issues and general on-call duties.
  • Creating secure and scalable systems in AWS.
  • Experience using sysctl to tune performance.
  • Experience operating highly available, large-scale distributed systems.
  • Apache and mod_wsgi, Nginx, Node.js.
  • Load Balancing (Big-IP F5).
  • Databases (PostgreSQL, MySQL, Redis, Cassandra).
  • Backup and Recovery.
  • Monitoring and Performance Tuning (Zenoss, statsd/graphite, Logstash, ElasticSearch).
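The statsd/graphite pairing in the monitoring stack above uses a deliberately simple wire protocol: plain-text lines of the form `metric.name:value|type` sent over UDP. A minimal formatter (the metric names are made up for illustration):

```python
# Format statsd protocol lines: "name:value|type", with an optional
# "@rate" suffix for sampled metrics.

def statsd_line(name, value, metric_type="c", sample_rate=None):
    line = f"{name}:{value}|{metric_type}"
    if sample_rate is not None:
        line += f"|@{sample_rate}"
    return line

print(statsd_line("web.requests", 1))                   # counter
print(statsd_line("web.latency_ms", 42, "ms"))          # timer
print(statsd_line("web.requests", 1, sample_rate=0.1))  # sampled counter
```

Shipping a line is then a single `socket.sendto(line.encode(), (host, 8125))`; because it's fire-and-forget UDP, instrumentation adds near-zero overhead to the hot path, which is why the pattern shows up in high-traffic Linux shops.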

4.  Build Master/Configuration Specialist (AWS, Linux, GIT, Puppet, etc.)

TouchTunes Interactive Networks – Montreal

Job Description:

In the clouds do you enjoy wielding the power to create, scale up and scale down compute clusters? Are you a technical puppet master able to automate and extend build systems?

Do you think not using a button on a web browser to deploy an entire compute cluster is so 2010? Are you an entrepreneurial spirit with strong closing abilities? Do you dream of containerization and virtualization systems and think the cloud is not big enough for the software you help deploy?

Then TouchTunes may be looking for you!
TouchTunes is the largest in-venue interactive music and entertainment platform, featured in over 71,000 bars and restaurants across North America and Europe. Our network supports a growing portfolio of location-based digital solutions that encourage social interactions through shared experiences.

TouchTunes is looking for a software Build Master / Configuration Specialist to work in our Services Platform Division.
The Build Master / Configuration Specialist works with architects, production leads, technical leads, developers, and IT staff to analyze, develop, and create development infrastructures; code repository and review systems; and containerization and deployment software for the TouchTunes Services Platform.

Roles and Responsibilities:

  • Maintain and evolve various build processes: Mobile Application, Backend Server, and REST APIs on AWS.
  • Help design, build, test and configure our next generation containerized micro-services platform (Docker).
  • Maintain and update the build tools (Java, PHP, RPM, Git, JHBuild, Maven, Jenkins, Gerrit) and create tools to report the status of builds and their content.
  • Create and maintain Puppet configurations.
  • Create tools to provide adequate documentation for managers and developers.
  • Administer a knowledge-base (Confluence) site used by 5 development groups (Montreal, Chicago, New York, Bangalore, Russia) and develop plugins (Java) to enhance functionality.
  • Enforce the “continuous integration” paradigm for every development team.
  • Support various development and operations groups by providing them a standardized process, from code push to deployment.
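The "report the status of builds" tooling mentioned above can start as something very small: aggregate per-job results (e.g. as returned by the Jenkins API) into a summary that flags anything not green. A sketch with hypothetical job names:

```python
# Summarize build results into a short status report. Input maps job name
# to a Jenkins-style result string: "SUCCESS", "FAILURE", or "UNSTABLE".

def build_report(results):
    failed = sorted(job for job, status in results.items()
                    if status != "SUCCESS")
    lines = [f"{len(results) - len(failed)}/{len(results)} builds green"]
    for job in failed:
        lines.append(f"  {job}: {results[job]}")
    return "\n".join(lines)

print(build_report({
    "mobile-app": "SUCCESS",
    "backend-server": "FAILURE",
    "rest-api": "SUCCESS",
}))
# 2/3 builds green
#   backend-server: FAILURE
```

In practice the `results` dict would be filled by polling each job's last-build status from the CI server, and the report posted where every development team can see it — a small but concrete piece of the "continuous integration for everyone" mandate.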

Written by

Michael Sheehy

I have been a UNIX/Linux System Administrator for the past 15 years and am slowly moving those skills into the AWS Cloud arena. I am passionate about AWS and cloud technologies and the exciting future they promise to bring.
