Senior Software Engineer with over 40 years of experience in the following disciplines: requirements analysis, software maintenance, systems analysis, systems design, systems engineering, software development, DevOps, intelligence systems, communications, and network management systems.
Over 14 years' project management experience with small (1 - 5 person) software development and maintenance efforts. Over 2 years of experience in project management of consolidated software development and system administration efforts.
Note: C/C++ work was primarily on projects and activities from 1993 to 2015.
Supported Splunk’s Cloud Security Automation efforts.
Continued the enhancement of the Python-based application (from the previous contract position with ITCO) that automates testing and verification for CIS (Center for Internet Security) and STIG (Security Technical Implementation Guides) compliance. Deployed the application across the Splunk AWS and GCP production, staging, and development environments for Splunk Cloud. Management changes occurred in Feb 2023; as part of that change it was decided to deprecate the internal tooling and use COTS solutions instead.
From Feb 2023 to Nov 2023, evaluated VMware Workspace One for possible use by Splunk. It was determined that Workspace One would not meet Splunk's requirements.
Nov 2023 - April 2024
Evaluating options to improve mobile device security with Splunk's existing Google Workspace environment.
March 2024 - May 2024
Tasked with deploying a COTS solution for an internal team within Splunk for security finding reports.
Supporting Splunk’s Cloud Security Automation efforts.
Enhanced a Python-based application that automates testing for CIS (Center for Internet Security) and STIG (Security Technical Implementation Guides) compliance. The application injects JSON-formatted results into Splunk's Skynet infrastructure. The Python app was enhanced to support both CIS and STIG, and over 170 STIG tests were developed as BETA versions for testing in the Splunk IL5 environments. Improvements to the CI/CD pipeline were implemented, including basic sanity checks and linting of both the Python app and the embedded shell scripts in the YAML files that define the individual tests. Vagrant and Docker environments were implemented to provide local test and verification environments on macOS, Linux, and Windows platforms.
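As an illustration of how a YAML-defined compliance test with an embedded shell check can be executed and turned into a JSON result, here is a minimal sketch; the YAML schema, field names, and result format are hypothetical stand-ins, not the actual Splunk tooling.

    import json
    import subprocess
    import yaml  # pip install pyyaml

    def run_compliance_test(path):
        """Load a YAML test definition, run its embedded shell check,
        and return a JSON-serializable result record."""
        with open(path) as fh:
            test = yaml.safe_load(fh)

        # Hypothetical schema: id, title, and an embedded shell script
        # whose exit status decides pass/fail.
        proc = subprocess.run(
            ["/bin/sh", "-c", test["check"]],
            capture_output=True, text=True, timeout=60,
        )

        return {
            "test_id": test["id"],
            "title": test["title"],
            "status": "pass" if proc.returncode == 0 else "fail",
            "output": proc.stdout.strip(),
        }

    if __name__ == "__main__":
        # The resulting JSON line could then be forwarded to an indexer.
        print(json.dumps(run_compliance_test("cis_1_1_1.yaml")))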
Supporting Uber's Compute Cluster Platform Group.
Added functionality and enhancements to Uber internal custom tooling, primarily written in Python. Developed Puppet modules that run under various Puppet versions for Debian-based systems. The Puppet work required development of Ruby-based Puppet functions and a Ruby-based custom facts generator. Additionally, worked on migrating an application from make to Bazel (https://bazel.build). Developed a small Ansible playbook that can add sysctl information to Ansible facts and allow setting/getting of specific sysctl values, as illustrated in the sketch below.
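The sketch below shows, in plain Python, the kind of /proc/sys reads and writes such tooling performs; it is an illustrative outline only (the real work used Ansible's facts and module plumbing), and the key names are ordinary sysctl examples.

    from pathlib import Path

    PROC_SYS = Path("/proc/sys")

    def sysctl_path(name: str) -> Path:
        # "net.ipv4.ip_forward" -> /proc/sys/net/ipv4/ip_forward
        return PROC_SYS / name.replace(".", "/")

    def sysctl_get(name: str) -> str:
        """Read the current value of a sysctl key."""
        return sysctl_path(name).read_text().strip()

    def sysctl_set(name: str, value: str) -> None:
        """Write a new value (requires root); mirrors `sysctl -w`."""
        sysctl_path(name).write_text(f"{value}\n")

    def gather_facts(names: list[str]) -> dict:
        """Collect selected sysctl values, e.g. to merge into host facts."""
        return {name: sysctl_get(name) for name in names}

    if __name__ == "__main__":
        print(gather_facts(["net.ipv4.ip_forward", "vm.swappiness"]))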
Software Engineer supporting Dropbox Corp DevOps. Dropbox uses Mac Pro (late 2013) machines to run ESXi servers that virtualize macOS; it is a legal requirement to use Apple hardware in order to virtualize macOS. Developed a solution that allows network-based installation of the ESXi server software with automatic reconfiguration using the same static IPs and VMware license that were assigned to the machine before it required reinstallation. Apple systems do not support PXE boot, but rather a proprietary protocol called NetBoot. The solution dynamically loads an iPXE (www.ipxe.org) shim, overwriting (in memory only) the Apple NetBoot boot shim, which allowed ESXi to be installed over the network. The solution is hosted on a Linux VM and is implemented as five Python processes (some run as daemons) as well as Python web applications using Apache2 and the WSGI gateway. Scapy (https://scapy.net) was used to capture and dynamically rewrite packets to emulate parts of the NetBoot protocol. The solution can run on VMware Workstation, VMware Fusion, or as a vApp under VMware vCenter.
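A minimal sketch of the Scapy capture-and-rewrite pattern described above; the interface name, port, and rewrite rule are placeholders, and the real NetBoot (BSDP) handling was considerably more involved.

    from scapy.all import IP, UDP, Raw, sniff, send  # pip install scapy

    IFACE = "eth0"   # placeholder interface
    BOOT_PORT = 67   # NetBoot/BSDP rides on the DHCP ports

    def rewrite(pkt):
        """Inspect a captured boot request and answer with a rewritten copy."""
        if UDP in pkt and Raw in pkt:
            payload = bytes(pkt[Raw].load)
            # ... decode vendor options here and decide how to respond ...
            reply = (
                IP(src=pkt[IP].dst, dst=pkt[IP].src)
                / UDP(sport=pkt[UDP].dport, dport=pkt[UDP].sport)
                / Raw(load=payload)  # placeholder: real code mutates the options
            )
            send(reply, iface=IFACE, verbose=False)

    # Requires root; runs until interrupted.
    sniff(iface=IFACE, filter=f"udp port {BOOT_PORT}", prn=rewrite, store=False)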
As a member of the Avaya Session Border Controller product team (https://www.avaya.com/en/product/avaya-session-border-controller-for-enterprise), my focus was on porting the existing VM-based version of the product to multiple cloud platforms. This included enhancements of the VM for actual operation in the cloud and in the DevOps/build environment. My primary cloud platform area of responsibility was Google Cloud Platform. In the last half of 2017, developed Python and Bash shell scripts to automate the building of KVM and VMware VMs using ISO or PXE boot as input sources. The KVM solution used the libvirt Python binding and the GCP gcloud and gsutil commands. The VMware VM solution used the pyvmomi API to allow automated building from ISOs on a single ESXi server. This allows building using the existing build artifacts from the current build system. The scripts are designed to run manually or by continuous integration tools like Bamboo or Jenkins.
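To illustrate the libvirt Python binding portion of that automation, here is a minimal sketch that defines and boots a KVM guest from an installer ISO; the domain XML, paths, and sizing are illustrative placeholders, not the production build definitions.

    import libvirt  # pip install libvirt-python

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>build-vm</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os>
        <type arch='x86_64'>hvm</type>
        <boot dev='cdrom'/>
      </os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/build-vm.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <disk type='file' device='cdrom'>
          <source file='/var/lib/libvirt/images/installer.iso'/>
          <target dev='sda' bus='sata'/>
          <readonly/>
        </disk>
        <interface type='network'>
          <source network='default'/>
        </interface>
        <graphics type='vnc' autoport='yes'/>
      </devices>
    </domain>
    """

    # Connect to the local hypervisor, define the guest, and start it.
    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(DOMAIN_XML)
    dom.create()
    print(f"{dom.name()} started, id={dom.ID()}")
    conn.close()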
The SDN Surge solution provides a technology set that creates encrypted zones to isolate, filter, and encrypt data flows from a device to a destination; security profiles determine who or what a device communicates with. Responsible for the base host virtualization platform (operating system and virtualization) and the DevOps environment. Surge was a green-field product. The initial design used four virtual machines hosted on a RHEL 7 operating system using the KVM libvirt framework; as the design evolved, the number of virtual machines increased to eight, one of which was provided by an OEM vendor. Development work started in May 2015, and as of June 2017 the following functionality was supported.

Git was used as the source code management system. Each virtual machine was stored in a separate Git repository so that developers were able to develop and build a virtual machine with a minimum of resources. The Git submodule feature is used by an appliance wrapper repository to bring in the virtual machines needed for the specific appliance variant. All components are buildable at the Unix command line. Bash scripting, Makefiles, and Python scripts were used in order to keep the required tool set minimal. The Bash scripts supported parameter files and command line options; the Bash indirect variable addressing feature is used to provide a straightforward implementation that supports default values, parameter files, and command line precedence overrides for all scripts.

The master build scripts are wrapped using Jenkins to provide build automation. JIRA is queried through its REST API with Python scripts to obtain sprint information and the list of JIRA issues resolved for each build. Each build is automatically tagged, and automation scripts remove tags and branches automatically when they are no longer needed. The VMs were built on RHEL/CentOS 5, RHEL 6, and RHEL 7 virtual machines. The Makefiles leveraged Python scripts to generate ISO or virtual machine images via Jenkins jobs in order to create virtual machines automatically. The Linux libguestfs tool set is used to open existing base VM disk images and to provision them using yum's installroot feature, reducing VM provisioning time during the build process. Both Production/Public and Private/Developer builds were supported via Jenkins.

Installation is supported by DVD and USB media or PXE server; legacy BIOS and UEFI boot are supported using DVD and PXE server. Once a build is complete, the necessary build artifacts are pushed to multiple PXE servers. The PXE servers can automatically update the PXE boot menus (GRUB 2 and syslinux formats) with the list of available builds on each PXE server, and the PXE boot update scripts use the Jenkins REST API to automatically update the available builds in the associated Jenkins job drop-down menu lists. Using Jenkins, developers and QA/PV/SV staff were able to schedule specific builds to be installed on specific machines and run the phase 1 configuration. The product used HP servers; using the HP iLO REST API, only the iLO IP address, admin account ID, and password are required to determine the MAC address of the Ethernet port to be used for PXE installation. Once the MAC address is known, a custom PXE boot menu can be programmatically generated to automatically install a specific build on a specific server. Using the HP iLO Virtual Serial Port (VSP), the installation progress can be monitored for completion.
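To give a concrete flavor of the HP iLO REST API step described above (determining a server's boot NIC MAC address from just the iLO address and credentials), here is a minimal sketch; it assumes an iLO firmware exposing the standard Redfish-style endpoints, and the host and credentials are placeholders.

    import requests  # pip install requests

    ILO = "https://10.0.0.50"           # placeholder iLO address
    AUTH = ("Administrator", "secret")  # placeholder credentials

    def list_nic_macs():
        """Return the MAC address of every Ethernet interface on system 1."""
        base = f"{ILO}/redfish/v1/Systems/1/EthernetInterfaces/"
        macs = []
        collection = requests.get(base, auth=AUTH, verify=False).json()
        for member in collection.get("Members", []):
            nic = requests.get(f"{ILO}{member['@odata.id']}",
                               auth=AUTH, verify=False).json()
            macs.append(nic.get("MACAddress"))
        return macs

    if __name__ == "__main__":
        # The selected MAC would then be written into a host-specific
        # PXE boot menu entry for the chosen build.
        print(list_nic_macs())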
Once the installation has completed successfully, Python pexpect and the HP iLO VSP were used to orchestrate the initial system configuration using the same commands a user would enter. The build and deployment requirements came from the R&D and QA teams. I was responsible for 100% of the design and 75% of the implementation; a contractor under my direct supervision developed the remaining 25% of the functionality. The installation of software using RHEL Kickstart technology was designed and implemented. In-place operating system upgrades are supported by both media and PXE boot installation. All packages developed by the R&D teams for the product are installed via RPMs. Two hardware platforms are supported by Surge; I developed a Python script that examines the /sys file system to determine available resources (CPUs, CPU speed, memory, etc.) and then uses a database stored in /etc to determine the required settings for the VMs (memory, number of CPUs, etc.). The libvirt Python API was used to update the VMs' configuration settings during installation. The configuration script is also run at system shutdown and startup via systemd to ensure the VMs are configured correctly for the hardware environment. Open vSwitch was used as part of the hypervisor platform; startup and shutdown scripts were developed to install custom flows between virtual machines and devices as required.
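As a sketch of the pexpect-driven configuration step, the fragment below scripts an interactive command-line session the way the orchestration drove the iLO virtual serial port; the host, credentials, prompt, and commands are hypothetical placeholders.

    import pexpect  # pip install pexpect

    HOST = "10.0.0.50"       # placeholder appliance/iLO address
    USER, PASSWORD = "admin", "secret"
    PROMPT = r"\$ "          # placeholder shell prompt pattern

    # Spawn an interactive session and drive it exactly as a user would.
    # (Assumes the SSH host key has already been accepted.)
    child = pexpect.spawn(f"ssh {USER}@{HOST}", timeout=120)
    child.expect("password:")
    child.sendline(PASSWORD)
    child.expect(PROMPT)

    for command in ["set network dns 10.0.0.2", "set hostname surge-01"]:
        child.sendline(command)   # hypothetical phase 1 commands
        child.expect(PROMPT)      # wait for each command to finish

    child.sendline("exit")
    child.close()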
The Avaya Identity Engines Ignition Server is an access control solution. Functioned as the primary platform engineer and DevOps engineer. IDEngines was primarily implemented in C++ and C.
Functioned as the R&D Effectiveness Prime for the NSNA/Ignition Server, NSNA/SNAS, and Nortel VPN Gateway products, including the Nortel Identity Engines Ignition Server product; the first Nortel release of Identity Engines was 6.0. Responsible for segregating two products out of a common CVS stream that were sold by Nortel to Radware (http://www.radware.com); the goal was to provide a fully functional source code repository, product build, and software development environment which provided the purchased products but did not provide the products, or versions of products, that were not purchased by Radware. The challenge lay in the fact that the products are branches from a single core parent and removal of unused components was never done over the multi-year, multi-release history of the products. Design Prime for the NSNA 3.0 release of the Nortel Secure Network Access Switch; this release was canceled when IDEngines was purchased by Nortel. Responsible for controller-side modifications for release 1.6.1.2 of the NSNA controller; the modifications provided a generic system (machine) and user logon capability. The majority of the software development was done in the Erlang programming language (http://www.erlang.org).
The SafeVelocity product is an enhanced FTP replacement that was implemented from the ground up using C/C++ on Unix and Linux to take advantage of the operating system capabilities of the late 1990s. The code base takes advantage of POSIX threads on the HP-UX, Solaris, AIX, and Tru64 UNIX platforms, and the product was implemented in Java, taking advantage of Java RMI, on Windows NT and Linux. SafeVelocity provides automatic available-bandwidth sensing with automatic smart compression on/off features, both before and during transmission of files, to minimize system load, bandwidth utilization, and total transfer time. The Java Native Interface (JNI) (C++) was utilized on Windows NT and Unix to provide access to operating system and product functionality that was not available at the time in the Java SDK. Developed Win32 services on NT/2000, one of which embeds a Java Virtual Machine, and a Win32 system control panel which interacts with the service using Win32 mailslots.
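The idea behind the smart compression on/off decision can be sketched as a simple cost comparison; the following is an illustrative outline only (the product's actual algorithm was proprietary C/C++ and considerably more refined), using zlib and a measured link speed as stand-ins.

    import time
    import zlib

    def compression_pays_off(sample: bytes, link_bits_per_sec: float) -> bool:
        """Compress a sample chunk and check whether the bytes saved on the
        wire are worth more than the CPU time spent compressing them."""
        start = time.perf_counter()
        compressed = zlib.compress(sample, 6)
        cpu_seconds = time.perf_counter() - start

        bytes_saved = len(sample) - len(compressed)
        wire_seconds_saved = max(bytes_saved, 0) * 8 / link_bits_per_sec
        return wire_seconds_saved > cpu_seconds

    if __name__ == "__main__":
        chunk = b"example payload " * 4096
        # On a slow link compression usually wins; on a fast LAN it may not.
        print(compression_pays_off(chunk, link_bits_per_sec=1_000_000))       # ~1 Mbit/s
        print(compression_pays_off(chunk, link_bits_per_sec=10_000_000_000))  # 10 Gbit/s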
Developed an Apache 2.0 module which enables Apache to serve as a license server. The currently supported platforms are Solaris (SPARC and x86), HP-UX 11.00, Linux, and FreeBSD. The SafeVelocity product is still offered for sale: https://solution-soft.com/products/safevelocity.
The NASA Internet project had a staff of over 20 individuals performing network connectivity requirements analysis and management, network design engineering and software development.
Project manager for the Single Source Processor Signals Intelligence (SIGINT) (SSP-S) program. The SSP-S is implemented as a distributed application system. Technical responsibilities included the communications front-end processor (FEP). On-site representative for:
From 1981 until 1988, supported the Communications Support Processor (CSP) in the European Theater, responsible for:
Software Developer on the following projects:
Google Cloud OnBoard, February 13, 2018
Splunk Fundamentals 1
I have authored the following classes that are currently available on Udemy: