Experienced L2 Database Administrator with 3.9 years in the information technology and services industry, working in the big data analytics domain with a focus on Amazon Redshift. Holds a Bachelor's degree in Mechanical Engineering and is currently associated with Wissen Infotech Pvt Ltd. Experienced in implementation, administration, and custom application development for GE Healthcare (GEHC), with sound knowledge of Redshift.
Redshift
AWS Services
S3
RDS
SQL
Linux/Unix
Windows
GitHub
AWS Console
AWS CloudWatch
Yarn UI
SQL Workbench
DBeaver
Zeppelin
• User Management
• Access Management
• Object Management
• Enabled WLM for the cluster, created queues, and assigned user groups to the queues.
• Changed the WLM setup on a defined schedule.
• Worked on cluster resizes, both Elastic and Classic.
• Automated scheduled WLM changes using the Auto WLM workload scheduler.
• Changed WLM allocations on ad hoc user requests.
• Worked on a POC enabling priority-based Auto WLM in lower environments.
• Scheduled monitoring scripts to detect long-running queries, blocking sessions, idle sessions, and data growth, and notified users to take immediate action to reduce query wait times.
• Enabled concurrency scaling for the reporting queue.
• Automated vacuuming of tables during the maintenance window using a Python script.
• Implemented deep copy on tables and automated the process with a stored procedure, run in parallel via Python to shorten the maintenance window.
• Automated daily parallel analysis of tables in batches.
• Automated compression checks that analyze tables and produce a report identifying columns that need compression.
• Created and automated a script to load the history of system tables, since system tables have a limited retention period.
• Copied data between environments using UNLOAD/COPY on ad hoc user requests.
• Automated cross-environment data copies using Python.
• Experienced in writing queries against multiple system tables to gather query metrics.
• Monitored Redshift Advisor recommendations and implemented them for performance improvements.
• Experienced in working with external schemas and external tables.
• Executed ad hoc queries and stored procedures on user requests.
• Experienced in tuning queries based on their explain plans to improve performance.
• Monitored cluster performance using the Redshift Console and CloudWatch alarms.
• Ran an end-to-end performance POC on data sharing and worked with AWS on all issues encountered during the POC.
• Developed a cluster-status dashboard on GitHub using static web pages generated with Python and Jupyter notebooks.
• Involved in cluster build activities for EMR clusters.
• Good working knowledge of Terraform.
• Responsible for creating monitors, alarms, and notifications for EC2 hosts using CloudWatch.
• Monitored the performance of EC2 instances via CloudWatch.
• S3: worked with S3 to create buckets for storing objects.
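The vacuum/analyze maintenance automation above can be sketched roughly as follows. This is a minimal sketch, not the original script: `run_sql` is a placeholder for a real Redshift connection call, and the table list and `TO 95 PERCENT` threshold are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def maintenance_statements(tables):
    """Build VACUUM and ANALYZE statements for each table.
    The 95-percent sort threshold is an illustrative choice."""
    stmts = []
    for t in tables:
        stmts.append(f"VACUUM FULL {t} TO 95 PERCENT;")
        stmts.append(f"ANALYZE {t};")
    return stmts

def run_maintenance(tables, run_sql, workers=4):
    """Run per-table maintenance in parallel to fit the window.
    run_sql is a hypothetical callable that executes one SQL
    statement against the cluster."""
    stmts = maintenance_statements(tables)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(run_sql, stmts))
```

Running the statements through a thread pool mirrors the parallel approach the bullets describe for shortening the maintenance window.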
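The cross-environment UNLOAD/COPY flow could be built on statement generators like the ones below; the S3 prefix and IAM role shown in usage are placeholders, not actual values from the projects.

```python
def unload_stmt(table, s3_prefix, iam_role):
    # UNLOAD the source table to S3 (Parquet preserves types and compresses well)
    return (f"UNLOAD ('SELECT * FROM {table}') "
            f"TO '{s3_prefix}/{table}/' "
            f"IAM_ROLE '{iam_role}' FORMAT AS PARQUET ALLOWOVERWRITE;")

def copy_stmt(table, s3_prefix, iam_role):
    # COPY the staged files into the same-named table on the target cluster
    return (f"COPY {table} "
            f"FROM '{s3_prefix}/{table}/' "
            f"IAM_ROLE '{iam_role}' FORMAT AS PARQUET;")
```

Each pair of statements would be executed on the source and target clusters respectively, which is what a Python driver script can sequence automatically.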
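The long-running-query monitoring could rest on a query against Redshift's STV_RECENTS system table, like the sketch below. The 30-minute threshold and the report format are assumptions for illustration; the actual monitoring scripts are not shown in this document.

```python
# Query template against STV_RECENTS; duration is reported in microseconds.
LONG_RUNNING_SQL = """
SELECT pid,
       user_name,
       duration / 1000000 / 60 AS minutes_running
FROM stv_recents
WHERE status = 'Running'
  AND duration / 1000000 / 60 > {threshold_minutes}
ORDER BY minutes_running DESC;
"""

def long_running_report(rows, threshold_minutes=30):
    """Format (pid, user, minutes) rows fetched with the query above
    into a notification message for users."""
    if not rows:
        return f"No queries running longer than {threshold_minutes} minutes."
    lines = [f"pid={pid} user={user} running {mins} min"
             for pid, user, mins in rows]
    return "Long-running queries:\n" + "\n".join(lines)
```

A scheduled job would run the query, pass the rows to the formatter, and send the message to affected users, matching the notification workflow described in the bullets.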