Job Title: Senior Software Engineer
Location: Seattle, WA 98104 (Onsite)
Job Type: FTE
Description/Comment: As a Senior Software Engineer within Enterprise & Consumer Data, you will help design, build, and support applications that provide massive amounts of data and data-related services to consumers across all of the Disney companies, with a focus on Identity, Device, and User 360 data and applications.
This will involve building near-real-time data processing pipelines, batch processing, analytics, and APIs to ingest, orchestrate, expose, and link data.
You will be exposed to all phases of the software development process, from business use-case refinement through architecture, implementation, testing, and deployment to support, and will have opportunities to work on challenges and problems not found in most companies due to volume, scale, near-real-time, and latency requirements.
You will build on cloud-based technology, often pushing the edge of what is available, and solve for scalability, reliability, resilience, and performance.
You will work with upstream providers of data, ranging from internal teams to external vendors, and with downstream internal consumers, as well as with other technical teams such as architecture, infrastructure/platform, DevOps, SecOps, QA, data analysts, and data science, to design and build systems that can elastically scale to handle massive amounts of data. You will also work closely with the Technical Project Manager(s) and Product Owners on task definition, estimation, and refinement.
Basic Qualifications:
This is a hands-on software development position, with the majority of your day focused on writing code; researching, troubleshooting, and resolving issues; and improving applications through business requests, technical-debt reduction, and R&D of new technology.
You will also help support several legacy applications as we migrate functionality to new applications.
Candidates will have a strong development background with a focus on Python, along with experience building enterprise-level applications, databases, and data warehouses, and familiarity with ETL technologies.
Primary skills:
Solid experience with Python
Extensive PySpark/Spark experience
Relational database experience (DDL, DML/SQL)
Big Data and tools (EMR, Hadoop, Hive, Databricks, Snowflake, DynamoDB)
AWS experience (basics, e.g., S3, SNS, SQS, EC2/ECS/Lambda, IAM, Security Groups, ALB)
Experience with messaging and/or streaming systems (Kafka, Kinesis, Spark Streaming)
Docker/containerization technologies
Solid experience with Linux, shell scripting
Writing unit, integration, and regression tests; TDD
6+ years of professional Python development
5+ years of experience using analytic SQL with traditional relational databases and/or distributed systems such as Hadoop/Hive, BigQuery, or Snowflake/Redshift
3+ years of development in a language other than Python (Java, Scala, Go, C)
3+ years of hands-on experience with AWS
Comfortable working in a fast-paced and highly collaborative environment
Comfortable with changing specifications and standards, and leading the team to deliver through ambiguity
Strong written and verbal communication skills, with the ability to communicate effectively at all levels
Ability to describe highly technical detail in terms the audience will understand
Ability to balance short-term requirements with long-term vision, weighing immediate business benefit against future growth and flexibility
Strong organizational and self-management skills, with the ability to multitask and to work both independently and within a team
Understanding of and experience with Agile processes (Scrum, Kanban)
Preferred Qualifications:
Scala and/or Java, Go
Familiarity with data exploration / data visualization
Graph Databases and query languages (Neptune, Janus, Neo4j, Gremlin/Cypher)
Grafana/Prometheus, Datadog
Orchestration/workflow management tools (Airflow, Nifi)
CloudFormation, Terraform, AWS CDK
Other non-AWS cloud technology/vendors