IRFAAN Syed

Dallas

Summary

Results-driven Site Reliability / DevOps Engineer with 12 years of experience across AWS, Azure, and GCP. Proficient in Kubernetes, Terraform, Jenkins, and observability tooling such as Splunk, Dynatrace, and New Relic, delivering reliable, automated infrastructure. Strong background in CI/CD, incident management, and cross-functional collaboration, ensuring timely completion of infrastructure and reliability projects.

Overview

12 years of professional experience
1 Certification

Work History

Lead Site Reliability Engineer (SRE)/Apigee

Freedom Mortgage
05.2021 - Current
  • Working as an SRE for the client Freedom Mortgage; job responsibilities include:
  • Analyzed, evaluated, designed, and coded to client specifications and requirements; designed and implemented complex software systems.
  • As an Apigee developer, specialized in designing predictive analytics applications to identify opportunities and risks, along with API management for analyzing user engagement.
  • Worked on testing, implementation, and maintenance of Cloud hosting solutions.
  • Ensured redundancy through high-availability design; conducted chaos testing and failure injection to analyze system behavior under failure conditions and drive design improvements.
  • Used Kotlin for implementing new modules in the application.
  • Built a Terragrunt project to keep Terraform configuration DRY while working with multiple Terraform modules.
  • Wrote scalable, robust, testable, efficient, and easily maintainable code in Go.
  • Worked closely with Kotlin Android extension frameworks.
  • Strengthened fault tolerance by designing failure mode analyses, and created resilient infrastructure by building multi-AZ/multi-region systems to withstand zone- or region-level outages.
  • Created Azure cloud solutions that solve pain points and meet the business's needs.
  • Implemented Azure application insights to store user activities and error logging.
  • Extensive experience developing single-page applications (SPAs) using JavaScript frameworks such as Angular 2.0 and React.js, with back-end servers such as Node.js.
  • Created a single-page site to display the status of customer orders using React.js, Redux, and Ext JS.
  • Hands-on experience with Azure cloud services (PaaS and IaaS): Storage, Web Apps, Active Directory, Application Insights, and Data Factory.
  • Developed service applications to consume and integrate with Kenexa using Grails, Spring, Hibernate, Groovy, and Oracle.
  • Written JUnit test cases to test services implemented in Grails and Groovy.
  • Used browser plug-in Postman to test web services
  • Worked with monitoring tools such as Application Insights and Log Analytics, and created alerts.
  • Designed Splunk Enterprise 6.x/5.x infrastructure to provide high availability by configuring clusters across two different data centers.
  • Wrote Splunk queries; expertise in searching, monitoring, analyzing, and visualizing Splunk logs.
  • Designed, optimized, and executed Splunk-based enterprise solutions.
  • Developed and implemented reliable cloud solutions.
  • Experienced in creating projects in Argo CD and deploying Argo CD into a Kubernetes cluster from scratch
  • Deployed into a Kubernetes cluster directly from GitHub using the Argo CD tool
  • Installed, configured, maintained, and tuned Splunk Enterprise Server 6.x/5.x.
  • Expertise in writing YAML playbooks to manage configurations; experienced in setting up master/minion architecture in Kubernetes to maintain containers.
  • Worked with AWS CloudFormation templates alongside Ansible to render templates, and wrote Ansible YAML automation scripts to create infrastructure and deploy application code changes autonomously.
  • Experienced with chaos engineering to validate fault tolerance under real-world failure scenarios.
  • Integrated chaos tools such as Gremlin, Chaos Monkey, and LitmusChaos into pipelines.
  • Expertise in deep development and enhancement of OpenStack; experienced in creating standards and patterns for deploying a Spring Boot microservice architecture to Pivotal Cloud Foundry (PCF).
  • Ensured successful architecture and deployment of enterprise-grade PaaS solutions using Pivotal Cloud Foundry (PCF), as well as proper operation during initial application migration and net-new development.
  • Performed field extractions and transformations using regex in Splunk (a minimal regex sketch follows this section).
  • Used New Relic APM, Catchpoint, and HP BPM tools to proactively monitor pre-production and production environments and identify application performance and availability issues.
  • Created dashboards in the New Relic console for monitoring purposes.
  • Worked with Terraform templates to automate Azure IaaS virtual machines using Terraform modules, and deployed virtual machine scale sets in production environments.
  • Identified and fixed performance issues in real time through dynamic monitoring with Catchpoint and New Relic in the production environment.
  • Leveraged technologies such as Kotlin, Android Jetpack, Retrofit, Navigation, ViewModel, Room, and Actions.
  • Experienced in writing apps from scratch in Kotlin
  • Defined telemetry requirements and maintained telemetry cable running lists for all cable equipment runs.
  • Verified telemetry equipment per venue as directed by capacity management.
  • Installed and configured inventoried equipment, obtaining appropriate management approvals, referencing applicable diagrams and drawings, and ensuring timely telemetry delivery.
  • Implemented Dynatrace on cloud platforms including AWS, Azure, and GCP, with solid Dynatrace AppMon implementation experience.
  • Installed and configured Splunk universal forwarders on both Unix (Linux, Solaris, and AIX) and Windows servers
  • Hands-on experience customizing Splunk dashboards, visualizations, and configurations using customized Splunk queries.
  • Scaled Azure cloud solutions to match the business's changing needs.
  • Translated software requirements into stable, working, high-performance Go software.
  • Played a key role in Go architectural and design decisions, building toward an efficient distributed microservice architecture.
  • Addressed software configuration management issues in coordination with the development team.
  • Implemented code builds and automated deployment procedures.
  • Resolved build and release dependencies in collaboration with other departments.
  • Outlined build and deployment procedures in consultation with developers.
  • Worked on Apigee Edge, which consists of API runtime, monitoring and analytics, and developer services that together provide a comprehensive infrastructure for API creation, security, management, and operations.
  • Examined and executed application-specific deploy processes.
  • Worked on Dynatrace APM end-to-end implementation
  • Experienced in restarting/bouncing Azure components, clusters, and VMs.
  • Managed Azure cloud infrastructure from the Azure portal, PowerShell, or using the CLI.
  • Used BigPanda to surface actionable, intelligent alerts from observability tools so that IT Ops and Service Operations teams could quickly triage and automate incident response workflows.
  • Used BigPanda to accelerate remediation and reduce MTTR by automating key incident management steps, from ticket creation to runbook automation.
  • Environment: Tortoise SVN, Jenkins, Java/J2EE, ANT, MAVEN, GIT, OpenStack, Amazon EC2, Amazon Web Services, Puppet, Chef, Python Scripts, Shell Scripts, SonarQube, UNIX, JIRA.
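
To ground the Splunk regex field-extraction work above, here is a minimal Python sketch of the same idea; the log format and field names are hypothetical stand-ins, not the actual extractions from this engagement.

```python
import re

# Hypothetical access-log line of the kind a Splunk field extraction targets.
LOG_PATTERN = re.compile(
    r"(?P<client_ip>\d{1,3}(?:\.\d{1,3}){3})\s+"        # source IP
    r"\[(?P<timestamp>[^\]]+)\]\s+"                      # bracketed timestamp
    r"\"(?P<method>[A-Z]+)\s+(?P<path>\S+)[^\"]*\"\s+"   # request line
    r"(?P<status>\d{3})"                                 # HTTP status code
)

def extract_fields(line: str):
    """Return named capture groups as a dict, mirroring Splunk's rex command."""
    match = LOG_PATTERN.search(line)
    return match.groupdict() if match else None

if __name__ == "__main__":
    sample = '10.0.0.7 [05/May/2021:10:15:32 +0000] "GET /api/v1/loans HTTP/1.1" 200'
    print(extract_fields(sample))
```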

SRE DevOps Engineer/Kubernetes

AT&T
11.2019 - 04.2021
  • Worked in a DevOps Engineer role onsite, ensuring solution delivery aligned to project methodology. Key roles and responsibilities included:
  • Analyzed, evaluated, designed, and coded to client specifications and requirements; designed and implemented complex software systems.
  • Implemented coding methods in a specific programming language (Java) to fulfill client-required functionality.
  • Experienced with performance settings and tuning queries for popular database management systems such as MySQL and Oracle.
  • Experienced in React.js and working with the React Flux architecture.
  • Worked closely with React Router for developing single-page applications (SPAs).
  • Identified and fixed performance issues in real time through dynamic monitoring with Catchpoint and New Relic in a production environment.
  • Used BigPanda to maintain service reliability, speed incident resolution, maximize IT investments, and scale incident management.
  • Used Mesos and Kafka for managing real-time data streams in the appropriate environments, relying on ZooKeeper for coordination.
  • Performed day-to-day administration of the ServiceNow tool; maintained business services and configuration item relationships in ServiceNow.
  • Coordinated service catalog options, including two-step checkout, cart controls, and variables.
  • Developed the back end using Groovy and Grails with Value Object and DAO patterns; used design strategies such as the Facade, Proxy, and Command patterns to use resources efficiently.
  • Experience in deploying applications into Kubernetes using GitOps tools like ArgoCD.
  • Experienced in creating projects in Argo CD and deploying Argo CD into Kubernetes clusters from scratch.
  • Coordinated installation of ServiceNow upgrades and/or service packs.
  • Worked on a mobile application developed in Java mixed with Kotlin using Android Studio, and a Web API with .NET Core 2 using Visual Studio.
  • Worked closely on the integration of ServiceNow with LDAP for authentication.
  • Integrated ServiceNow with BMC Remedy for ticket creation on change submission.
  • Worked on migrating Dynatrace AppMon to Dynatrace Managed.
  • Worked closely with Application Insights to monitor live web applications, automatically detecting performance anomalies.
  • Used Azure Application Insights to understand the performance and usage of live web applications.
  • Monitored the Splunk infrastructure for capacity planning, scalability, and optimization
  • Implemented Hadoop clusters for processing big data pipelines using Amazon EMR and Cloudera, relying on Apache Spark for fast processing and API integration; in the end, these resources were deployed using Apache Mesos.
  • Experience with container management tools Docker, Mesos, Marathon, and Kubernetes; also managed clusters of nodes using Docker Swarm, Compose, DC/OS, and Kubernetes.
  • Designed and configured Azure Virtual Networks (V-Nets), subnets, Azure network settings, DHCP address blocks, DNS settings, security policies and routing.
  • Deployed Azure IaaS virtual machines (VMs) and Cloud services (PaaS role instances) into secure V-Nets and subnets.
  • Provided overall management of the Splunk platform.
  • Assisted with designing core scripts to automate Splunk maintenance and alerting tasks; supported Splunk on UNIX, Linux, and Windows platforms; assisted with automating processes and procedures.
  • Standardized and implemented Splunk Universal Forwarder deployment, configuration, and maintenance on Linux and Windows platforms.
  • Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics); ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed it in Azure Databricks.
  • Responsible for implementing monitoring solutions in Ansible, Terraform, Docker, and Jenkins.
  • Automated Datadog dashboards across the stack through Terraform scripts.
  • Wrote Terraform scripts for CloudWatch alerts (an equivalent boto3 sketch follows this section).
  • Configured MQ as foreign JNDI servers in JBoss; installed and configured MQ client libraries and set up MQ Series for JBoss applications.
  • Installed and administered many clustered web application servers (JBoss Enterprise Application Platform, JBoss EWS, Tomcat, GlassFish) hosted on RHEL/Windows platforms.
  • Worked in container-based technologies like Docker, Kubernetes and OpenShift.
  • Point person on OpenShift for creating new projects and services for load balancing, adding them to routes for external access, and troubleshooting pods.
  • Proficient with container systems such as Docker and container orchestration such as EC2 Container Service and Kubernetes; worked with Terraform.
  • Environment: Tortoise SVN, Jenkins, Java/J2EE, ANT, MAVEN, GIT, OpenStack, Amazon EC2, Amazon Web Services, Puppet, Chef, Python Scripts, Shell Scripts, SonarQube, UNIX, JIRA.
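
As a rough illustration of the CloudWatch alerting described above, here is a minimal boto3 sketch in Python that creates the kind of alarm such a Terraform script would declare; the alarm name, instance ID, and SNS topic ARN are placeholders, not values from this engagement.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Hypothetical alarm: alert when average EC2 CPU stays above 80% for 10 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",                  # placeholder name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                                    # 5-minute datapoints
    EvaluationPeriods=2,                           # two consecutive breaches
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```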

Lead DevOps/ SRE Engineer

Red black Software
04.2019 - 11.2019
  • Solved complex problems with creative solutions, supporting development and deployment operations in different environments.
  • Experienced in DevOps/Agile operations processes and tooling (code review, unit test automation, build and release automation, and environment, service, incident, and change management).
  • Experience in cloud administration with AWS and Google Cloud Platform.
  • Hands on experience in provisioning and managing Hadoop clusters on Google Cloud Platform Compute Engine instances.
  • Worked on designing and developing a real-time tax computation engine using Oracle, StreamSets, Kafka, Spark Structured Streaming, and MySQL.
  • Involved in ingestion, transformation, manipulation, and computation of data using StreamSets, Kafka, MySQL, and Spark.
  • Exposure to Mesos, Marathon, and ZooKeeper cluster environments for application deployments and Docker containers.
  • Experience working with Mesos/Marathon for Docker container orchestration and used Marathon UI to deploy applications and schedule long running jobs.
  • Performed a POC to compare Change Data Capture (CDC) timing for Oracle data across Striim, StreamSets, and Dbvisit.
  • Designed and developed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web applications and database templates.
  • Designed, built, supported, and maintained Splunk infrastructure in a highly available configuration.
  • Performed installation, configuration management, license management, data integration, data transformation, field extraction, event parsing, data preview, and Apps management of Splunk platform
  • Standardized Splunk forwarder deployment, configuration and maintenance in Linux and Windows platforms
  • Experience with the AWS platform and its features, including IAM, EC2, EBS, VPC, RDS, CloudWatch, CloudTrail, CloudFormation, AWS Config, Auto Scaling, CloudFront, S3, SQS, SNS, Lambda, and Route 53.
  • Used Jenkins pipelines to drive all microservice builds out to the Docker registry and then deploy to Kubernetes; created and managed pods using Kubernetes.
  • Created CSRs and installed SSL certificates on JBoss EWS; experience with Linux operating system tools, scripting tools, file permissions, resource provisioning, and troubleshooting in a virtual environment.
  • Built and maintained Docker container clusters managed by Kubernetes on GCP (Google Cloud Platform) using Linux, Bash, Git, and Docker; utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy.
  • Authoritative knowledge of the following languages: C# (C Sharp), ASP.NET, ADO.NET, HTML, XHTML
  • Working knowledge of the Telerik DevCraft suite, specifically the ASP.NET AJAX and Kendo UI libraries.
  • Involved in development of test environment on Docker containers and configuring the Docker containers using Kubernetes.
  • Developed the team's capabilities in data science and machine learning, and applied them to create new data-driven insights.
  • Worked closely with development teams to ensure accurate integration of machine learning models into firm platforms
  • Spun up HDInsight clusters and used Hadoop ecosystem tools such as Kafka, Spark, and Databricks for real-time streaming analytics, and Sqoop, Pig, Hive, and Cosmos DB for batch jobs.
  • Knowledge of one or more open-source machine learning frameworks.
  • Configured BGP routes to enable ExpressRoute connections between on premise data centers and Azure cloud.
  • Created storage pools and disk striping for Azure virtual machines; backed up, configured, and restored Azure virtual machines using Azure Backup.
  • Configured Windows failover clusters by creating a quorum for file sharing in the Azure cloud.
  • Designed User Defined Routes with custom route tables for specific cases to force tunnelling to the Internet via On-premises network and control use of virtual appliances in the customer's Azure environment.
  • Worked on Google Cloud Platform (GCP) services such as Compute Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Stackdriver Monitoring, and Cloud Deployment Manager.
  • Set up GCP firewall rules to allow or deny traffic to and from VM instances based on specified configurations, and used GCP Cloud CDN (content delivery network) to deliver content from GCP cache locations, drastically improving user experience and latency.
  • Deployed and monitored AWS resources (EC2, VPC, ELB, S3, RDS) using Chef and Terraform 0.12.11.
  • Implemented Terraform 0.12.2 modules for deploying various applications across multiple cloud providers and managing infrastructure.
  • Implemented Kafka and documented integration, authorization, and authentication of Kafka and other ecosystems (a minimal producer sketch follows this section).
  • Experience in Linux Administration, Configuration Management, Continuous Integration, Continuous Deployment, Release Management and Cloud Implementations.
  • In-depth understanding of the principles and best practices of Software Configuration Management (SCM) processes, which include compiling, packaging, deploying and Application configurations.
  • Extensive experience in Setting up Application Tier, Build Controllers, Build Agents in Team foundation Server (TFS) 2010, 2012, 2013, 2015 and 2017.
  • Worked with many tools, including HP VuGen 12.0/12.53, Controller, Dynatrace, Performance Center, and Fiddler.
  • Working knowledge of Dynatrace 6.2 for collecting performance metrics and finding the root cause of issues with the help of PurePath.
  • Developed relationships between business users and development team to implement and customize PCF platforms
  • Ran capacity planning sessions for PCF infrastructure.
  • Used AWS Storage Gateway and Direct Connect to connect the client to PCF.
  • Environment: Tortoise SVN, Jenkins, Java/J2EE, ANT, MAVEN, GIT, OpenStack, Amazon EC2, Amazon Web Services, Puppet, Chef, Python Scripts, Shell Scripts, SonarQube, UNIX, JIRA.
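
A minimal kafka-python sketch of the ingestion side of the pipeline described above; the broker address, topic name, and event fields are hypothetical placeholders.

```python
import json

from kafka import KafkaProducer  # pip install kafka-python

# Placeholder broker and serializer; real pipelines read these from configuration.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

# Hypothetical tax-computation event, mirroring the Oracle -> Kafka -> Spark flow.
producer.send("tax-events", {"order_id": 42, "amount": 119.99, "region": "TX"})
producer.flush()  # block until buffered records are delivered
```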

Site Reliability Engineer (DevOps Engineer)

Regions Bank
11.2018 - 03.2019
  • Worked as a DevOps Engineer for a Highmark web-based application. Planned and set up continuous integration for various properties on Jenkins with commit, component, assembly, deploy, and smoke jobs. Used Jenkins, Maven, and Chef to automate builds. Used the Hadoop distributions Hortonworks and CDH, and converted an Ant build project to a Maven build on the DevOps platform.
  • Responsibilities:
  • Involved in DevOps migration/automation processes for build and deploy systems.
  • Worked with dev teams on setting up custom data collectors and service endpoints to monitor application behavior that AppDynamics wouldn't capture out of the box.
  • Performed vulnerability analysis of mobile/embedded platforms, applications, protocols, and supporting infrastructure.
  • Experience working with various CDC tools such as Oracle GoldenGate, StreamSets, and Striim.
  • Experience working with Vagrant boxes to set up local Kafka and StreamSets pipelines.
  • Maintained continuous awareness of threats, vulnerabilities, and techniques in mobile security, web-based microservices, and associated fields.
  • Worked closely with development teams, management, and enterprise partners to establish Mobile DevOps priorities and execute accordingly.
  • Created custom scripts to automate AppDynamics installation on various machines based on input and requirements, selecting the appropriate application server.
  • Developed microservice onboarding tools leveraging Python and Jenkins, allowing easy creation and maintenance of build jobs and Kubernetes deployments and services.
  • Designed and continually improved continuous integration within the mobile development organization at TD.
  • Scoped out and built the CI/CD infrastructure for mobile application-related activities.
  • Worked with the mobile development and testing teams to help streamline workflows for the ongoing SDLC, sustaining/release activities
  • Drove architecture and functionality of the CI/CD environment for the mobile development team.
  • Worked on cross-platform Windows/Unix environments with TFS and ClearCase.
  • Configured the TFS 2015/2013 environment along with SharePoint Services and Reporting Services.
  • Worked on TFS 2012 Sandbox as well as Azure.
  • Worked in an R&D team that helps the business maintain a competitive edge; the R&D function develops plans well ahead of other functions.
  • Built and maintained Docker container clusters managed by Kubernetes on GCP using Linux, Bash, Git, and Docker.
  • Worked with OpenShift platform in managing Docker containers and Kubernetes Clusters.
  • Reported status to Release Management. Responsible for program planning, design, inspection and adaption, synchronization, release planning and improvement planning. Worked with other technical teams for resolution.
  • Worked with execution team and tracking team to record risks, reports, and metrics.
  • Implemented the build automation process for all assigned projects in the Vertical Apps domain.
  • Involved in designing and documenting the deployment and migration process.
  • Actively involved in various production and lower-level environment deployment
  • Worked with SVN and GIT version controls.
  • Evolved new tools and methodologies to improve the existing process and show better results to all stakeholders.
  • Setting up new development branches, merging branches, facilitating the releases.
  • Setting up the new repos, Managing the permissions for various GIT branches.
  • Creating GIT stashes.
  • Created post-commit and pre-push hooks using Python in SVN and Git repos (a minimal hook sketch follows this section).
  • Environment: Tortoise SVN, Jenkins, Java/J2EE, ANT, MAVEN, GIT, OpenStack, Amazon EC2, Amazon Web Services, Puppet, Chef, Python Scripts, Shell Scripts, SonarQube, UNIX, JIRA.
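
A minimal sketch of a Python pre-push hook like the ones mentioned above, saved as .git/hooks/pre-push and made executable; the branch-protection policy shown is a hypothetical example, not the actual hook logic from this engagement.

```python
#!/usr/bin/env python3
"""Pre-push hook: block direct pushes to protected branches (hypothetical policy)."""
import sys

PROTECTED = {"refs/heads/main", "refs/heads/master"}

def main() -> int:
    # Git feeds one line per ref being pushed on stdin:
    # <local ref> <local sha> <remote ref> <remote sha>
    for line in sys.stdin:
        parts = line.split()
        if len(parts) == 4 and parts[2] in PROTECTED:
            print(f"Direct pushes to {parts[2]} are not allowed; open a PR.",
                  file=sys.stderr)
            return 1  # non-zero exit aborts the push
    return 0

if __name__ == "__main__":
    sys.exit(main())
```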

DevOps Engineer

Wells Fargo
01.2016 - 09.2018
  • As a DevOps Consultant, worked on the DevOps Platform team responsible for cloud automation using Chef; organized product and build releases using Ant or Maven and integrated builds using Jenkins. Developed build and deployment scripts using Maven as the build tool in Jenkins to promote builds from one environment to the next.
  • Responsibilities:
  • Responsible for Deployment Automation using multiple tools Chef, Jenkins, GIT, ANT Scripts.
  • Wrote Chef cookbooks and recipes in Ruby to provision several pre-production environments consisting of Cassandra DB installations, WebLogic domain creation, and several proprietary middleware installations.
  • Implemented RESTful Microservices using Spring Boot Framework. Generated Metrics with method level granularity and Persistence using Spring Actuator.
  • Used Spring Config Server for centralized configuration and Splunk for centralized logging; used Docker and Jenkins for microservice deployment.
  • Worked on establishing a streamlined release process for the development team from scratch.
  • Automated the build and release management process including monitoring changes between releases.
  • Created various ANT scripts to create multiple deployment profiles and deploy the applications to Apache Tomcat.
  • Participated in integration of applications with existing APIs.
  • Implemented integration of use case scenarios as per requirements.
  • Strong hands-on experience with Apache Camel routing techniques.
  • Worked closely with project managers to establish and design the release plan.
  • Managed the source control using version controlling tools like SVN and GIT.
  • Implemented Infrastructure automation through Puppet, for auto provisioning, code deployments, software installation and configuration updates.
  • Configured Jenkins to run nightly builds and generate a change log covering changes from the previous 24 hours (a minimal script sketch follows this section).
  • Connected the continuous integration system to the Git version control repository to build continually as developer check-ins arrive.
  • Responsible for design and maintenance of the Subversion/GIT Repositories, views, and the access control strategies.
  • Created branches and managed the source code for various applications in SVN and GIT.
  • Created various scripts in Python and Ruby for automation of various build processes.
  • Experience building large infrastructure for disaster recovery and multi data center strategy.
  • Involved in Building data backup/recovery strategy and plans.
  • Designed and implemented Subversion and GIT metadata including elements, labels, attributes, triggers and hyperlinks.
  • Wrote Maven and Ant build scripts for application-layer modules.
  • Managed web app configuration and deployment to AWS cloud servers through Chef.
  • Environment: Java/J2EE, Subversion, Ant, Maven, Jenkins, GIT, SVN, Chef, Puppet, AWS, Python, Shell Scripting, Ruby.
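
The nightly change log described above can be produced with a short script; here is a minimal Python sketch using git log, with the repository path left as a placeholder.

```python
#!/usr/bin/env python3
"""Emit commits from the last 24 hours, the raw material for a nightly change log."""
import subprocess

def changelog_since(hours: int = 24, repo: str = ".") -> str:
    # --since accepts a relative date; %h/%an/%s give short hash, author, subject.
    result = subprocess.run(
        ["git", "-C", repo, "log", f"--since={hours} hours ago",
         "--pretty=format:%h  %an  %s"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(changelog_since() or "No changes in the last 24 hours.")
```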

UNIX Admin

Scalar Soft Pvt. Ltd.
08.2013 - 08.2014
  • Responsible for DEVELOPMENT, UIT, SYSTEM, UAT, STAGING, and PRODUCTION builds and releases.
  • Assisted with maintaining current build systems, developed build scripts, and maintained the source control system.
  • Responsible for CI environments (Jenkins, Nexus, SonarQube).
  • Developed build and deployment scripts using ANT and MAVEN as build tools in Jenkins to move from one environment to other environments.
  • Tested the application manually.
  • Performed weekly and on-call deployments of application codes to production environments.
  • Reviewed and modified project requirements, designs, and scope.
  • Provided technical guidance during requirements gathering and documentation.
  • Coordinated application release with developer, DBA, QA and project management teams.
  • Worked in cross-platform environments like Linux, UNIX, AIX and Windows.
  • Documented detailed build configurations, build procedures, and change history for releases.
  • Participated in design from the initial stage of development and prepared class and sequence diagrams.
  • Coordinating with development teams to perform builds and resolve build issues.
  • Analyzed and created daily reports on the status of server backups on the intranet and extranet (a minimal report sketch follows this section).
  • Provided complete phone support to customers.
  • Troubleshot tickets in the help desk tracking system.
  • Configured environments, ran unit tests, and created reports using Maven.
  • Environment: Java, J2EE, SVN (Subversion), Hudson, Ant, Maven, Jenkins, JIRA, Shell/Perl Scripting, WebSphere, UNIX
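
A minimal Python sketch of the kind of daily backup-status report described above; the log path and line format ("host date SUCCESS|FAILURE") are hypothetical assumptions.

```python
#!/usr/bin/env python3
"""Summarize backup results from a plain-text status log (hypothetical format)."""
from collections import Counter
from pathlib import Path

LOG_FILE = Path("/var/log/backup_status.log")  # placeholder path

def summarize(log: Path) -> Counter:
    # Each line is assumed to look like: "web01 2014-03-05 SUCCESS"
    counts = Counter()
    for line in log.read_text().splitlines():
        fields = line.split()
        if len(fields) == 3:
            counts[fields[2]] += 1
    return counts

if __name__ == "__main__":
    totals = summarize(LOG_FILE)
    print(f"Backups: {totals.get('SUCCESS', 0)} succeeded, "
          f"{totals.get('FAILURE', 0)} failed")
```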

Education

Master's degree - Electrical and Computer Engineering

Texas A&M University
05.2016

Bachelor's degree - Electrical Engineering

KL University
05.2013

Skills

  • Programming in C, C++, and Java
  • Skilled in utilizing Ant and Maven for software development
  • Software provisioning expertise
  • Proficient in Jenkins, Hudson, and Bamboo
  • Proficient in Tomcat, JBoss, WebLogic, and WebSphere
  • Project release management
  • Experience in managing issues using Bugzilla and JIRA
  • Experienced with CVS and GIT
  • Oracle database management
  • Proficient in Windows variants

Certification

Certified Kubernetes Administrator (CKA)

Timeline

Lead Site Reliability Engineer (SRE)/Apigee

Freedom Mortgage
05.2021 - Current

SRE DevOps Engineer/Kubernetes

AT&T
11.2019 - 04.2021

Lead DevOps/ SRE Engineer

Red black Software
04.2019 - 11.2019

Site Reliability Engineer (DevOps Engineer)

Regions Bank
11.2018 - 03.2019

DevOps Engineer

Wells Fargo
01.2016 - 09.2018

UNIX Admin

Scalar Soft Pvt. Ltd.
08.2013 - 08.2014

Master's degree - Electrical and Computer Engineering

Texas A&M University

Bachelor's degree - Electrical Engineering

KL University