Eric Bruno

Santa Clara, CA

Summary

Senior Software Engineer with over 40 years of experience in the following disciplines: requirements analysis, software maintenance, systems analysis, systems design, systems engineering, software development, DevOps, intelligence systems, and communications and network management systems.

Over 14 years' project management experience with small (1 - 5 person) software development and maintenance efforts. Over 2 years of experience in project management of consolidated software development and system administration efforts.

Note: C/C++ work was primarily on projects and activities from 1993 to 2015.

  • LinkedIn https://www.linkedin.com/in/eric-bruno-1795101
  • GitHub https://github.com/ebruno

Overview

46 years of professional experience
1 certification

Work History

Senior Software Engineer

Splunk
San Jose, CA
08.2021 - 05.2024

Supported Splunk’s Cloud Security Automation efforts.

Continued enhancing the Python-based application (begun under the previous contract position with ITCO) that automates testing and verification for CIS (Center for Internet Security) and STIG (Security Technical Implementation Guides) compliance. Deployed the application across Splunk Cloud's AWS and GCP production, staging, and development environments. After a management change in February 2023, the decision was made to deprecate the internal tooling and use COTS solutions instead.

From February 2023 to November 2023, evaluated VMware Workspace ONE for possible use by Splunk; it was determined that Workspace ONE would not meet Splunk's requirements.

From November 2023 to April 2024, evaluated options to improve mobile device security within Splunk's existing Google Workspace environment.

From March 2024 to May 2024, deployed a COTS solution for security finding reports for an internal team within Splunk.

Software Engineer

ITCO Solutions
San Jose, CA
02.2021 - 08.2021

Supported Splunk's Cloud Security Automation efforts.

Enhanced a Python-based application that automates testing for CIS (Center for Internet Security) and STIG (Security Technical Implementation Guides) compliance. The application injects JSON-formatted results into Splunk's Skynet infrastructure. The Python app was extended to support both CIS and STIG, and over 170 STIG tests were developed as beta versions for testing in the Splunk IL5 environments. CI/CD pipeline improvements were implemented, including basic sanity checks and linting of both the Python app and the shell scripts embedded in the YAML files that define the individual tests. Vagrant and Docker environments were implemented to provide local test and verification environments on macOS, Linux, and Windows platforms.
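
The heart of such a compliance runner can be sketched in a few lines of Python: execute the shell command a YAML test definition embeds, then wrap the outcome as a JSON event suitable for indexing. Everything here (check IDs, commands, and the result schema) is illustrative, not Splunk's internal tooling:

```python
# Minimal sketch of a CIS/STIG-style compliance check runner.
import json
import shlex
import subprocess
from datetime import datetime, timezone

# Each check pairs an identifier with the shell command a YAML test
# definition would embed (both are placeholders).
CHECKS = {
    "CIS-1.1.1": "stat -c '%a' /etc/passwd",
    "STIG-V-230221": "sysctl -n net.ipv4.ip_forward",
}

def run_check(check_id: str, command: str) -> dict:
    """Run one check and wrap the outcome as a JSON-serializable record."""
    proc = subprocess.run(
        shlex.split(command), capture_output=True, text=True, timeout=30
    )
    return {
        "check_id": check_id,
        "command": command,
        "exit_code": proc.returncode,
        "output": proc.stdout.strip(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Emit one JSON object per line, a shape indexers ingest easily.
    for check_id, command in CHECKS.items():
        print(json.dumps(run_check(check_id, command)))
```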

Software Engineer

Collabera Inc.
Santa Clara, CA
12.2019 - 06.2020

Supported Uber's Compute Cluster Platform Group.

Added functionality and enhancements to Uber's internal custom tooling, primarily written in Python. Developed Puppet modules that run under various Puppet versions on Debian-based systems; the Puppet work required developing Ruby-based Puppet functions and a Ruby-based custom facts generator. Also worked on migrating an application from make to Bazel (https://bazel.build). Developed a small Ansible playbook that adds sysctl information to Ansible facts and allows getting and setting specific sysctl values.
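
For the sysctl work, the general Ansible pattern is a small custom module that reads /proc/sys and returns the values as facts. A minimal sketch under that assumption (the module's argument names and fact layout are hypothetical, not Uber's tooling):

```python
# Sketch of a custom Ansible module exposing sysctl values as facts
# by reading /proc/sys directly.
from pathlib import Path

from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(
        argument_spec=dict(
            keys=dict(type="list", elements="str", required=True),
        ),
        supports_check_mode=True,
    )
    facts = {}
    for key in module.params["keys"]:
        # sysctl key "net.ipv4.ip_forward" maps to
        # /proc/sys/net/ipv4/ip_forward
        path = Path("/proc/sys") / key.replace(".", "/")
        if path.is_file():
            facts[key] = path.read_text().strip()
    module.exit_json(changed=False, ansible_facts={"sysctl": facts})

if __name__ == "__main__":
    main()
```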

Software Engineer

Astreya Partners Inc.
San Jose, CA
05.2018 - 06.2019

Software engineer supporting Dropbox Corp DevOps. Dropbox uses Mac Pros (late 2013) to run ESXi servers that virtualize macOS; Apple hardware is legally required in order to virtualize macOS. Developed a solution that allows network-based installation of the ESXi server software, automatically reconfiguring each machine with the same static IPs and VMware license it was assigned before reinstallation. Apple systems do not support PXE boot, only a proprietary protocol called NetBoot. The solution dynamically loads an iPXE (www.ipxe.org) shim, overwriting the Apple NetBoot boot shim in memory only, which allows ESXi to be installed over the network. The solution is hosted on a Linux VM and is implemented as five Python processes (some run as daemons) plus Python web applications using Apache2 and the WSGI gateway. Scapy (https://scapy.net) was used to capture and dynamically rewrite packets to emulate parts of the NetBoot protocol. The solution can run on VMware Workstation, VMware Fusion, or as a vApp under VMware vCenter.
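
The packet-handling side of such a solution follows a standard Scapy pattern: sniff the boot traffic, rewrite addresses and payload, and send a crafted reply. The sketch below shows only that pattern; the interface name, BPF filter, and payload are placeholders, not the actual NetBoot exchange:

```python
# Scapy pattern for emulating parts of a boot protocol: sniff targeted
# frames, rewrite addresses/payload, and resend.
from scapy.all import Ether, IP, UDP, Raw, sendp, sniff

IFACE = "eth0"  # interface facing the Mac Pros (assumed name)

def rewrite_and_reply(pkt):
    if not (pkt.haslayer(IP) and pkt.haslayer(UDP) and pkt.haslayer(Raw)):
        return
    # Swap source/destination and substitute a payload that points clients
    # at the iPXE shim instead of Apple's NetBoot boot shim.
    reply = (
        Ether(src=pkt[Ether].dst, dst=pkt[Ether].src)
        / IP(src=pkt[IP].dst, dst=pkt[IP].src)
        / UDP(sport=pkt[UDP].dport, dport=pkt[UDP].sport)
        / Raw(load=b"...ipxe boot parameters...")
    )
    sendp(reply, iface=IFACE, verbose=False)

# Requires root; the BPF filter narrows capture to boot traffic.
sniff(iface=IFACE, filter="udp port 67 or udp port 68",
      prn=rewrite_and_reply, store=False)
```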

Senior Software Engineer

Avaya
Santa Clara, CA
07.2017 - 02.2018

As a member of the Avaya Session Border Controller product team (https://www.avaya.com/en/product/avaya-session-border-controller-for-enterprise), my focus was porting the existing VM-based version of the product to multiple cloud platforms. This included enhancements to the VM for actual operation in the cloud and in the DevOps/build environment. My primary cloud platform area of responsibility was Google Cloud Platform. In the last half of 2017, developed Python and Bash scripts to automate building KVM and VMware VMs using ISO or PXE boot as input sources. The KVM solution used the libvirt Python binding and the GCP gcloud and gsutil commands. The VMware solution used the pyvmomi API to allow automated builds from ISOs on a single ESXi server, reusing the existing build artifacts from the current build system. The scripts are designed to run manually or from continuous integration tools such as Bamboo or Jenkins.
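
Driving a single ESXi host with pyvmomi, without vCenter, reduces to a short connect-locate-operate sequence. A hedged sketch (the host, credentials, and VM name are placeholders):

```python
# Connect to a single ESXi host (no vCenter), find a VM by name, and
# power it on.
import ssl

from pyVim.connect import Disconnect, SmartConnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab host, self-signed cert
si = SmartConnect(host="esxi.example.com", user="root",
                  pwd="secret", sslContext=context)
try:
    content = si.RetrieveContent()
    # Enumerate all VMs on the host via a container view.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "build-vm-01")
    task = vm.PowerOnVM_Task()
    # Real automation would poll task.info.state for success/error.
finally:
    Disconnect(si)
```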

Senior Software Engineer

Avaya
Santa Clara, CA
04.2015 - 06.2017

The SDN Surge solution provides a technology set that creates encrypted zones to isolate, filter, and encrypt data flows from a device to a destination; security profiles determine who or what a device communicates with. I was responsible for the base host virtualization platform (operating system and virtualization) and the DevOps environment. Surge was a greenfield product. The initial design used four virtual machines hosted on a RHEL 7 operating system using the KVM libvirt framework; as the design evolved, the number of virtual machines increased to eight, one of which was provided by an OEM vendor. Development started in May 2015, and as of June 2017 the following functionality was supported.

Git was used as the source code management system. Each virtual machine was stored in a separate Git repository so developers could develop and build a virtual machine with a minimum of resources; the Git submodule feature is used by an appliance wrapper repository to bring in the virtual machines needed for each appliance variant. All components are buildable from the Unix command line. Bash scripts, Makefiles, and Python scripts were used to keep the required tool set minimal. The Bash scripts supported parameter files and command-line options; Bash indirect variable addressing provided a straightforward implementation of default values with parameter-file and command-line precedence overrides for all scripts.

The master build scripts are wrapped with Jenkins to provide build automation. JIRA is queried through its REST API with Python scripts to obtain sprint information and the list of JIRAs resolved in each build. Each build is automatically tagged, and automation scripts remove tags and branches when they are no longer needed. The VMs were built on RHEL/CentOS 5, RHEL 6, and RHEL 7 virtual machines. The Makefiles leveraged Python scripts to generate ISO or virtual machine images via Jenkins jobs so virtual machines could be created automatically. The Linux libguestfs tool set is used to open existing base VM disk images and provision them using yum's installroot feature, reducing VM provisioning time during the build. Both production/public and private/developer builds were supported via Jenkins.

Installation is supported from DVD and USB media or a PXE server; both legacy BIOS and UEFI boot are supported for DVD and PXE installs. Once a build completes, the necessary build artifacts are pushed to multiple PXE servers, which automatically update their PXE boot menus (GRUB 2 and syslinux formats) with the list of available builds. The PXE boot update scripts use the Jenkins REST API to refresh the list of available builds in the associated Jenkins job drop-down menus. Using Jenkins, developers and QA/PV/SV staff could schedule specific builds to be installed on specific machines and run the phase 1 configuration.

The product used HP servers. Using the HP iLO REST API, only the iLO IP address, admin account ID, and password are required to determine the MAC address of the Ethernet port used for PXE installation. Once the MAC address is known, a custom PXE boot menu can be generated programmatically to install a specific build on a specific server, and the HP iLO Virtual Serial Port (VSP) is used to monitor installation progress. Once installation completes successfully, Python pexpect and the iLO VSP are used to orchestrate initial system configuration using the same commands a user would enter.

The build and deployment requirements came from the R&D and QA teams. I was responsible for 100% of the design and approximately 75% of the implementation; a contractor under my direct supervision developed the remaining 25%. Software installation using RHEL Kickstart technology was designed and implemented, and in-place OS upgrades are supported from both media and PXE boot installation. All packages developed by the R&D teams are installed via RPMs. Two hardware platforms are supported by Surge; I developed a Python script that examines the /sys file system to determine available resources (CPU count, CPU speed, memory, etc.) and then uses a database stored in /etc to determine the required VM settings (memory, number of CPUs, etc.). The libvirt Python API is used to update the VMs' configuration during installation, and the configuration script also runs at system shutdown and startup via systemd to ensure the VMs are configured correctly for the hardware environment (a sketch of this probe-and-configure pattern follows below). Open vSwitch was used as part of the hypervisor platform, with startup and shutdown scripts installing custom flows between virtual machines and devices as required.
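
As a rough illustration of the probe-and-configure step described above, the following sketch reads host memory, picks a sizing profile, and applies it through the libvirt Python API. The profile table and domain name are stand-ins for the /etc-hosted database and the real VMs:

```python
# Illustrative probe-and-configure sketch using the libvirt Python API.
import libvirt

def host_memory_kib() -> int:
    """Parse MemTotal from /proc/meminfo, e.g. 'MemTotal: 16384256 kB'."""
    with open("/proc/meminfo") as fh:
        for line in fh:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])
    raise RuntimeError("MemTotal not found")

# Stand-in for the sizing database stored in /etc, keyed by platform model.
PROFILES = {
    "small": {"mem_kib": 4 * 1024 * 1024, "vcpus": 2},
    "large": {"mem_kib": 16 * 1024 * 1024, "vcpus": 8},
}

model = "large" if host_memory_kib() >= 32 * 1024 * 1024 else "small"
profile = PROFILES[model]

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("surge-app-vm")  # placeholder domain name
if dom.isActive() == 0:  # resize only while the guest is shut off
    dom.setMaxMemory(profile["mem_kib"])
    dom.setMemoryFlags(profile["mem_kib"], libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    dom.setVcpusFlags(profile["vcpus"], libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```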

Senior Software Engineer

Avaya
Santa Clara, CA
12.2009 - 04.2015

The Avaya Identity Engines Ignition Server is an access control solution. Functioned as the primary platform engineer and DevOps engineer. IDEngines was primarily implemented in C++ and C.

  • Release 9.1.0: The base operating system was upgraded from RHEL 6.3 to RHEL 6.5 for both development and deployed environments. The virtual machine hosting the Ignition Server was made generic, and the build and automation systems were enhanced to create a generic RHEL 6.N based VM that can host other products.
  • Two new products were released based on the new virtual machine. A new standardized command-line interface was developed, implemented in Python. The virtual machine is designed to allow multiple products/solutions to be installed at build time but licensed/activated independently; the architecture allows multiple solutions to be implemented and activated on the same virtual machine if required. New virtual-machine-based developer environments were implemented and provided to the respective development teams.
  • Release 9.0.0: Responsible for upgrading the operating system from RHEL 5.5 to RHEL 6.3. This involved upgrading 18 libraries and developing the master development environment, system build environment, and production runtime environment. These environments were used by the development and Release Engineering teams; the production runtime environment was tested by PV/QA and is the environment the product ships with to customers. The environments are virtual machines that can be hosted on VMware, Hyper-V, Xen, and VirtualBox as well as actual hardware.
  • Release 8.0.0: Developed a user-land extension to the Linux proc file system to allow richer process management. In the 9.0.0 release this was extended to use the Linux inotify system to provide fine-grained, near real-time control of Apache HTTPD and Tomcat servers as well as other services and their configuration files (a Python sketch of the watch-and-react pattern follows this list). The library was implemented in C and C++ as an extensible set of classes for managing Sys V RC based services.
  • Release 8.0.0: For the Identity Engines portfolio, developed the build and software development environments for the Avaya Captive Portal solution, which is based on pfSense and FreeBSD.
  • Release 8.0.0: Developed the functionality that allows the underlying Red Hat Enterprise Linux (RHEL) OS distribution and the proprietary IDEngines application to be upgraded using a common packaging solution.
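
The watch-and-react pattern referenced in the 8.0.0/9.0.0 bullets can be illustrated in Python, although the original library was C/C++. This sketch uses the third-party inotify_simple package and a placeholder restart hook:

```python
# Python analogue of the inotify-driven service control described above.
# Assumes `pip install inotify_simple`; the config path and service name
# are placeholders.
import subprocess

from inotify_simple import INotify, flags

CONF_DIR = "/etc/httpd/conf"  # directory whose changes trigger action

def restart_service(name: str) -> None:
    # Stand-in for the richer process-management layer.
    subprocess.run(["service", name, "restart"], check=False)

ino = INotify()
ino.add_watch(CONF_DIR, flags.CLOSE_WRITE | flags.MOVED_TO)
while True:
    for event in ino.read():  # blocks until filesystem events arrive
        restart_service("httpd")
```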

Senior Software Engineer

Nortel
Santa Clara, CA
11.2005 - 12.2009

Functioned as the R&D Effectiveness Prime for the NSNA/Ignition Server, NSNA/SNAS, and Nortel VPN Gateway products; the first Nortel release of Identity Engines was 6.0. Responsible for segregating, out of a common CVS stream, two products that Nortel sold to Radware (http://www.radware.com); the goal was to provide a fully functional source code repository, product build, and software development environment that contained the purchased products but not the products, or versions of products, that Radware had not purchased. The challenge lay in the fact that the products were branches from a single core parent, and removal of unused components had never been done over the multi-year, multi-release history of the products. Design Prime for the NSNA 3.0 release of the Nortel Secure Network Access Switch; this release was canceled when IDEngines was purchased by Nortel. Responsible for controller-side modifications for release 1.6.1.2 of the NSNA controller, which provided a generic system (machine) and user logon capability. The majority of the software development was done in the Erlang programming language (http://www.erlang.org).

  • Release 6.0: Responsibilities included creating the customized Red Hat Enterprise Linux 5.3 distribution and developing the integration for the Ignition Server, which was originally hosted on NetBSD. To accomplish this, a new development and production build environment had to be created that merged the existing Ignition Server build process with one that produced the customized RHEL distribution, the final output being a bootable installation ISO image. The installation used a modified Anaconda installer and its Kickstart functionality to provide operational, debug, and development variants. I was also responsible for ensuring the VMware image was VMware certifiable and for producing the required test results for submission to VMware.

Software Architect

Solution-Soft Systems
Santa Clara, CA
06.1999 - 11.2005

The SafeVelocity product is an enhanced FTP replacement implemented from the ground up in C/C++ on Unix and Linux to take advantage of the operating system capabilities of the late 1990s. The code base uses POSIX threads on the HP-UX, Solaris, AIX, and Tru64 UNIX platforms, with a Java implementation using Java RMI on Windows NT and Linux. SafeVelocity provides automatic available-bandwidth sensing with automatic smart compression on/off, both before and during file transmission, to minimize system load, bandwidth utilization, and total transfer time. The Java Native Interface (JNI) (C++) was used on Windows NT and Unix to provide access to operating system and product functionality not available in the Java SDK at the time. Developed Win32 services on NT/2000, one of which embeds a Java Virtual Machine, and a Win32 system control panel that interacts with the service using Win32 mailslots.

  • Lead software architect for the SafeVelocity product and e-Core Data Transfer library. Responsibilities included directing the efforts of two senior and two junior software developers as well as performing design and development work. The SafeVelocity product comprises over 14 cooperating multi-threaded C and Java applications, Win32 services, and control panels (over 250,000 lines of code). Responsible for the Win32 version of the SafeCapacity product, the user front end, and the final integration of the product components into a customer distribution.
  • Ported the SafeVelocity product to Linux under the 2.6 kernel (Fedora Core 3) with the Native POSIX Thread Library (NPTL), and to Red Hat Linux Advanced Server 3.0 under the 2.4 kernel with NPTL support.
  • Integrated Macrovision FlexNet support for Linux into SolutionSoft's licensing library.
  • Upgraded the SafeVelocity product to support Microsoft Windows Server 2003. Enhanced the SafeVelocity installation process to use Microsoft Windows Installer (MSI).

Developed an Apache 2.0 module that enables Apache to serve as a license server; the supported platforms are Solaris (SPARC and x86), HP-UX 11.00, Linux, and FreeBSD. The SafeVelocity product is still offered for sale: https://solution-soft.com/products/safevelocity.

Senior Software Engineer/Manager

Sterling Software, Inc., NASA Ames Division
Mountain View, CA
06.1993 - 01.1999

The NASA Internet project had a staff of over 20 individuals performing network connectivity requirements analysis and management, network design engineering and software development.

  • Project manager for the NASA Internet project in support of the NASA Integrated Services Network (NISN).
  • Group Lead for various groups from June 1996 until 1 November 1998: the Network Systems Support Group, the Applications Development Group, and the Network Design Engineering Group.
  • NISN responsibilities included membership on the NISN Wide Area Network (WAN) Network Management Consolidation team and overseeing the Network Systems Support (NSS) group's development of real-time, web-based graphical monitoring and analysis tools for the Asynchronous Transfer Mode (ATM) communications used by the NISN WAN. The web-based tools were developed using the University of California, Davis SNMP tools and the Generic Logic (www.genlogic.com) GLG Toolkit.

Software Engineer/Manager

Sterling Software Intelligence and Military Division
Bellevue, NE
04.1981 - 06.1993

Project manager for the Single Source Processor Signals Intelligence (SIGINT) (SSP-S) program. The SSP-S is implemented as a distributed application system; technical responsibilities included the communications front-end processor (FEP).

  • Responsible for the direction and implementation of changes to the FEP. The source code base exceeded 300,000 lines of FORTRAN and 85,000 lines of MACRO-11; releases occurred 2-3 times per year.
  • Designed an IBM PC based AUTODIN communications protocol simulator under MS-DOS 3.2 that functioned at the AUTODIN message level for testing, system acceptance testing, and training. This simulator was used as the reference standard for release testing of SSP-S system components.

From 1981 until 1988, supported the Communications Support Processor (CSP) in the European Theater, responsible for:

  • Maintaining the operational status of the CSP system by identifying, analyzing, reporting, and resolving CSP software problems.
  • Installing, integrating, and implementing all new CSP software releases or baseline modifications.
  • User training on an ongoing basis.
  • Serving as on-site representative at: Headquarters, United States European Command (USEUCOM), Vaihingen, West Germany; the United States Air Forces in Europe (USAFE) Combat Operations Intelligence Center (COIC); the USAFE Tactical Fusion Center (TFC); and Headquarters, United States Army Europe (USAREUR).

Software Engineer/Developer

System Research Labs, Inc
Beavercreek, OH
06.1978 - 03.1981

Software Developer on the following projects:

  • Strategic Air Command (SAC) Technical Electronic Intelligence (ELINT) Processing System (STEPS). The solution was PDP-11/RSX-11M based. Responsible for the design, implementation, and FORTRAN coding of a database management system for STEPS and of an analysis package for a specific type of ELINT data using interactive vector graphics.
  • Computer Controlled Antenna Measurement System (CCAMS) project (HP 21MX/HP RTE). Responsible for the design and implementation of device drivers and of foreground and background tasks for a real-time environment, and for performing system generation for the production systems.
  • Worked on several small scientific applications packages at Wright-Patterson Air Force Base, Ohio.

Education

Bachelor of Science - Computer Science

Rose-Hulman Institute Of Technology
Terre Haute, IN
05.1978

Skills

  • C/C++
  • Erlang
  • Java
  • PHP
  • Python
  • Golang
  • Ruby
  • Scapy (https://scapy.net)
  • JavaScript
  • Jython
  • TCL/TK
  • Unix Shell Scripting
  • Puppet
  • Ansible
  • TCP/IP
  • Google Cloud CLI/Python API's
  • Atlassian (JIRA, Confluence, Crucible)
  • Jenkins (including the REST API)
  • Git
  • GitLab and GitLab CI/CD
  • CircleCI
  • Sun Java Native Interface (JNI) Toolkit (Unix and Win32)
  • Oracle (PL/SQL), Oracle Application Server
  • Subversion
  • Sonatype (Nexus)
  • VMware ESXi Server/vCenter
  • VMware Workstation/Fusion
  • RHEL KVM
  • Libvirt API
  • Linux
  • Unix (Solaris, HP-UX, FreeBSD)
  • Mac OS X
  • Windows NT
  • MySQL/MariaDB
  • Antlr
  • Ant
  • Doxygen
  • Bazel
  • Docker Containers

Accomplishments

Accomplishments at Nortel
  • Implemented standardized VMware images used by the NSNA and NVG teams, both Nortel and partners, for product development, maintenance, and testing. This reduced the time needed for a developer to bring up a development environment and reduced flux in each developer's environment.

  • Specified hardware for NVG and NSNA development systems. For NSNA, added quad-NIC cards, which allowed the use of VMware instead of physical boxes in the developers' test beds and reduced the number of machines needed by an average of 3-5 per developer.

Accomplishments at Avaya

  • During the transition from Nortel to Avaya in 2009, operational and capital expense budgets were under tight constraints. Developed the plan to upgrade the memory and storage of existing IBM 3650 based Nortel appliances so they could be used for IDEngines development. The upgrade cost was approximately US $1,500 per machine versus US $6,000 for a new machine. Approximately 20 machines were upgraded, and they were still in use during 2016.

Certification

  • Links to certificates can be found on my LinkedIn profile https://www.linkedin.com/in/eric-bruno-1795101
  • AWS Fundamentals: Going Cloud-Native by AWS on Coursera on Jun 2020
  • AWS Fundamentals: Addressing Security Risk by AWS on Coursera on Jun 2020
  • AWS Fundamentals: Migrating to the Cloud by AWS on Coursera on Jun 2020
  • AWS Fundamentals: Building Serverless Applications by AWS on Coursera on Jun 2020
  • Programming with Google Go, a 3-course specialization by University of California, Irvine. Specialization certificate earned on July 2, 2019
  • Getting Started with Go by University of California, Irvine, earned on June 25, 2019
  • Functions, Methods and Interfaces in Go by University of California, Irvine, earned on July 1, 2019
  • Concurrency in Go by University of California, Irvine, earned on July 2, 2019
  • Architecting with Google Cloud Platform, a 6-course specialization by Google Cloud on Coursera. Specialization certificate earned on February 27, 2018
  • Google Cloud Platform Fundamentals: Core Infrastructure by Google Cloud on Coursera. Certificate earned on February 20, 2018
  • Essential Cloud Infrastructure: Foundation by Google Cloud on Coursera. Certificate earned on February 23, 2018
  • Essential Cloud Infrastructure: Core Services by Google Cloud on Coursera. Certificate earned on February 24, 2018
  • Elastic Cloud Infrastructure: Scaling and Automation by Google Cloud on Coursera. Certificate earned on February 26, 2018
  • Elastic Cloud Infrastructure: Containers and Services by Google Cloud on Coursera. Certificate earned on February 26, 2018
  • Reliable Cloud Infrastructure: Design and Process by Google Cloud on Coursera. Certificate earned on February 27, 2018
  • Data Engineering on Google Cloud Platform, a 5-course specialization by Google Cloud on Coursera. Specialization certificate earned on March 11, 2018
  • Google Cloud Platform Big Data and Machine Learning Fundamentals by Google Cloud on Coursera. Certificate earned on March 8, 2018
  • Leveraging Unstructured Data with Cloud Dataproc on Google Cloud Platform by Google Cloud on Coursera. Certificate earned on March 9, 2018
  • Serverless Data Analysis with Google BigQuery and Cloud Dataflow by Google Cloud on Coursera. Certificate earned on March 10, 2018
  • Serverless Machine Learning with Tensorflow on Google Cloud Platform by Google Cloud on Coursera. Certificate earned on March 11, 2018
  • Building Resilient Streaming Systems on Google Cloud Platform by Google Cloud on Coursera. Certificate earned on March 11, 2018
  • Getting Started with Google Kubernetes Engine by Google Cloud on Coursera. Certificate earned on March 6, 2018
  • VMware Technical Sales Professional 5, December 2012

Seminars/Classes

Google Cloud OnBoard, February 13, 2018

Splunk Fundamentals 1

Additional Information

Nortel purchased the intellectual property of Identity Engines in the fall of 2008. Avaya purchased Nortel Enterprise Solutions in 2009. Avaya Networking and the Identity Engines product were sold to Extreme Networks in 2017.

Projects of Interest

  • Sept/Oct 2020: Helped evolute.io (https://www.evolute.io) package their proprietary software and two FOSS projects into RPMs that build on CircleCI and publish the build artifacts to an Azure Storage file share. A proof of concept was done of a virtual machine on Azure publishing the RPMs via HTTPS as a yum/dnf repository.
  • Using an ISO image from a different build process, created a Python application using the pyvmomi API that talks directly to a VMware ESXi server (without requiring vCenter). The application could upload an ISO (if not already present) to be used for installation and create a new VM from an existing template VM (again without requiring vCenter). Using the pyvmomi API, the script obtains the template VM's configuration, creates a new VM based on that configuration, and attaches the ISO; the VM is then powered on, and the application waits for the installation to complete and the VM to power off automatically (a pyvmomi polling sketch follows this list). Once the VM powered off, its disk(s) were exported and downloaded. Using a specified XML OVF file as a template, the OVF was updated to use the downloaded disk, a manifest with SHA values was created, and finally an OVA was produced. This took about one month to develop. The application had full logging and command-line support, with hooks to allow integration with continuous integration services such as Bamboo and Jenkins.
  • Essentially the same functionality as the previous project, but focused on RHEL using KVM with Google Cloud as the target. The libvirt Python API is used by the application. The ISO is transferred to the KVM host (local, i.e., the machine the script runs on, or remote). The application uses a template VM and modifies its XML based on the system model/target settings for memory, CPU, NICs, etc.; the disk size is also based on the model (a libvirt template-editing sketch follows this list). Once the XML is configured using the Python XML package APIs, the VM is created and installation starts. When installation completes and the VM powers off automatically, the disk is converted to the format required for upload to Google Cloud and uploaded as part of a tar.gz file; the next step automatically creates a Google Compute image that can be used to create a Google Compute instance. The script can also create the instance and configure its network based on the selected model. The Google SDK Python APIs required a version of Python 2.7 not in the standard RHEL distribution, so the gcloud and gsutil commands were driven from Bash scripts to manage image and instance creation. The scripts are designed to run manually or from continuous integration tools such as Bamboo or Jenkins. For both this project and the previous one, the end goal was to enhance them to be usable for automated configuration and regression testing.
  • Dropbox uses Mac Pros (late 2013) to run ESXi servers that virtualize macOS (Apple hardware is legally required in order to virtualize macOS). Developed a solution that allows network-based installation of the ESXi server software, automatically reconfiguring each machine with the same static IPs and VMware license it was assigned before reinstallation. Apple systems do not support PXE boot, only a proprietary protocol called NetBoot; the solution dynamically loads an iPXE (www.ipxe.org) shim, overwriting the Apple NetBoot boot shim in memory only, which allows ESXi to be installed over the network. The solution is hosted on a Linux VM and is implemented as five Python processes (some run as daemons) plus Python web applications using Apache2 and the WSGI gateway. Scapy (https://scapy.net) was used to capture and dynamically rewrite packets to emulate parts of the NetBoot protocol. Threads and the inotify subsystem are used in the solution, and the Python applications are managed by systemd as services and timers as required. Each application is packaged as an Ubuntu-compatible Debian package. The solution can run on VMware Workstation, VMware Fusion, or as a vApp under VMware vCenter. Debian installer preseed files are used to automate creation of the virtual machine via network installation, and the build system creates an Ubuntu-compatible Debian repository that can be used over the network or from media.
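
The "wait for the installer VM to power itself off" step from the pyvmomi project above reduces to polling the VM's runtime power state. A small sketch, assuming `vm` is a vim.VirtualMachine obtained from an established pyvmomi session as shown earlier:

```python
# Poll a VM's power state until the unattended installer shuts it down.
import time

from pyVmomi import vim

def wait_for_poweroff(vm, timeout_s=3600, poll_s=15):
    """Return once the guest powers off, or raise on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff:
            return
        time.sleep(poll_s)
    raise TimeoutError("installation VM never powered off")
```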
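The libvirt template-editing step from the KVM/Google Cloud project can likewise be sketched with the standard-library XML tools plus the libvirt Python API; the template path, the specific elements edited, and the model table are illustrative:

```python
# Load a template domain XML, apply model-based sizing, and define/start
# the domain so the unattended install can run.
import xml.etree.ElementTree as ET

import libvirt

# Stand-in for the model/target settings table.
MODEL = {"memory_kib": 8 * 1024 * 1024, "vcpus": 4}

tree = ET.parse("/var/lib/templates/base-vm.xml")  # placeholder path
root = tree.getroot()
# <memory>, <currentMemory>, <vcpu>, and <name> are direct children of
# <domain> in libvirt's XML format.
root.find("memory").text = str(MODEL["memory_kib"])
root.find("currentMemory").text = str(MODEL["memory_kib"])
root.find("vcpu").text = str(MODEL["vcpus"])
root.find("name").text = "gcp-image-build"

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(ET.tostring(root, encoding="unicode"))
dom.create()  # start the VM; the install then runs unattended
conn.close()
```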

Udemy Authored Courses

I have authored the following course, currently available on Udemy:

  • Advanced BASH 4.0 Scripting (https://www.udemy.com/course/advanced-bash-4-scripting/)

Timeline

Senior Software Engineer

Splunk
08.2021 - 05.2024

Software Engineer

ITCO Solutions
02.2021 - 08.2021

Software Engineer

Collabera Inc.
12.2019 - 06.2020

Software Engineer

Astreya Partners Inc.
05.2018 - 06.2019

Senior Software Engineer

Avaya
07.2017 - 02.2018

Senior Software Engineer

Avaya
04.2015 - 06.2017

Senior Software Engineer

Avaya
12.2009 - 04.2015

Senior Software Engineer

Nortel
11.2005 - 12.2009

Software Architect

Solution-Soft Systems
06.1999 - 11.2005

Senior Software Engineer/Manager

Sterling Software, Inc., NASA Ames Division
06.1993 - 01.1999

Software Engineer/Manager

Sterling Software Intelligence and Military Division
04.1981 - 06.1993

Software Engineer/Developer

System Research Labs, Inc
06.1978 - 03.1981

Bachelor of Science - Computer Science

Rose-Hulman Institute Of Technology
05.1978