PALO ALTO NETWORKS is a revolutionary and dynamic company creating next-generation enterprise security products. If you are a motivated, intelligent, creative, and hardworking individual who wants to contribute and make a difference, this job is for you!
We are the global cybersecurity leader, known for always challenging the security status quo. Our mission is to protect our way of life in the digital age by preventing successful cyberattacks. This has given us the privilege of safely enabling tens of thousands of organizations and their customers. Our pioneering Security Operating Platform empowers their digital transformation with continuous innovation that seizes the latest breakthroughs in security, automation, and analytics.
JOB OVERVIEW:
You will be responsible for the architecture, design, and development of the query, reporting, and analytics components of the Application Framework for the Palo Alto Networks Threat Intelligence group. You will actively participate in the design, evolution, and implementation of connectivity tools that make it possible to search and retrieve information from the Framework, and help design and implement the reporting backend for efficient execution of queries on a petabyte-scale data store using innovative techniques. Strong SQL and NoSQL skills are key to success in this role.
RESPONSIBILITIES:
- As a Big Data Java Engineer, you will be an integral member of our data processing team, with a focus on efficient query execution on a large multi-tenant data store.
- Develop libraries and tools that allow a variety of external tools and applications to connect to the data store and run queries using standardized open protocols such as JDBC, as well as proprietary REST APIs.
- Work and communicate with product management, operations, and quality assurance groups to ensure the ease of use, adoption, quality, diagnosability, and operability of these components.
- Suggest and implement improvements to development tools and processes.
QUALIFICATIONS:
- BS in Computer Science/Engineering, or equivalent experience
- 5+ years of experience in design and implementation in a data processing environment with large volumes of data
- Systems design experience designing solutions composed of multiple interoperating components
- Good understanding of SQL and NoSQL databases, including some knowledge of their internals and of database connectivity standards such as JDBC
- Hands-on experience with Java and object modeling, and general awareness of trends and techniques in the Java ecosystem; some exposure to Scala is desirable but not required
- Experience working with Hadoop and/or Elasticsearch ecosystems and cloud-based deployments is a plus
- Self-driven, with a passion for innovation, a can-do attitude toward problem-solving, a commitment to quality, and the ability to execute
- Good interpersonal and teamwork skills