Palo Alto Networks Ignite 2017

I had the opportunity to attend Palo Alto Networks Ignite this year. Around 4000 people attended Ignite 2017, ranging from small businesses to large Fortune 500 companies.

While the conference had something for everyone, the reason I attended was to focus on design and product offerings surrounding Amazon Web Services and Microsoft Azure.

Palo Alto Networks announced several releases during their Ignite 2017 keynote.

Among the numerous sessions and networking opportunities, Palo Alto Networks offered what they called “Hands-on Workshops”, which showed how their products fit into each cloud ecosystem. I attended both the AWS and Azure workshops but walked away feeling less than enthused, simply because the content focused on what the products can do in the cloud rather than on how to deploy them in each environment according to vendor best practices.

Overall, I walked away from this conference wishing for more how-to and architecture discussions, but I did leave knowing more than I did before. For those of you just starting with Palo Alto Networks, check it out!

Transitioning Employers and Maintaining Relationships

I recently had the opportunity to transition from a network and server analyst role to that of a cyber security engineer. Throughout the departure process, I chose several tactics to avoid burning bridges, which I thought I would share.

1. Know that you are ready for a new challenge before it is too late.

At some point in the network and server analyst role, I realized that I wasn’t being challenged. While I was extremely busy and often overwhelmed with the amount of work, the lack of structure and of management support to provide the necessary resources led me to lose interest in the work I was doing. I would come in, grind out the eight hours and leave. The worst part was that when I got home, my personal projects involving the same skillsets no longer interested me either. This was a wake-up call. I began making a conscious effort to network with friends and colleagues in the field to find out what people were doing (and what sounded interesting to me). I realized that my employer of course did not have my best interests in mind; they were the employer. Once I came to this understanding, I began searching for challenges elsewhere, which eventually led me to where I am today.

TL;DR – if you feel like you need a change of scenery or you don’t feel challenged, it is probably time to consider other options for expressing creativity in your career.

2. Be upfront with your employer (if possible)

When I was discussing my transition with my friends, several of them didn’t understand why I was being so truthful with my employer about leaving. I believe that if you want to maintain relationships with your previous coworkers and management (you never know when you may need to reach out to them), the departure process can be a learning experience for both parties, provided both are interested in learning. For me, it was about leaving on a professional note, knowing that I had directly told them what was wrong, what they did wrong, what they didn’t do to fix it, and how it just wasn’t a great fit for me anymore. It worked out for me.

I don’t recommend this for those who feel they can’t communicate in a professional way.

3. Create and provide documentation for active and existing projects

When leaving, I didn’t want to leave my team in a rough situation, so I spent my last month wrapping up and documenting as much as I could. Coworkers asked me for things I never thought to document, and I obliged: passwords, relevant emails, and introducing vendors and clients to coworkers to make the transition period easier are all fair game. I also maintain contact with my former coworkers in case they need anything I forgot to get out of my head before I walked out.

4. Provide reasonable notice for the projects you are involved in.

If you are in the middle of a deep project, try to wait until it is at a good point to transition. If there is no hope of that on the horizon, discuss with management what their preference would be; I did, knowing that I was somewhat flexible. This isn’t for everyone, but for me it gave them time to figure out what they were going to do about a staffing gap while they still had daily access to me. I gave a month of informal notice to my coworkers and managers and two weeks of formal notice to HR. Anything less (with exceptions) could be perceived as hostile and burn bridges.

5. Provide a professional resignation letter (explain your motivation for moving on).

Self-explanatory. Do not say things you will regret.

Social Engineering Advanced Practical Engineering Training Thoughts

The purpose of this blog post is to share the experience I had while taking Social-Engineer Inc.’s Advanced Practical Social Engineering Training course in Orlando a couple months ago.

While I am not a professional social engineer or penetration tester, I have been gaining interest in the skills associated with both of these professions. Social engineering is not just for those who are trying to elicit information for malicious purposes; it can be used to influence people as well. The purpose of this class for me was to gain deeper insight into how to read people, body language and emotions, and to learn how to adapt my communication so it is better received by the party on the other end of the conversation.

Learning how to communicate may come easily to some people, but after this class I gained a much deeper insight into how I come across to others in various situations. For me specifically, under stress I tend to be dominant and can come off as cold and harsh (which I already knew). What I didn’t understand was how to adapt my communication style to the people in the room with me when I was stressed so that I didn’t come off that way.

Social-Engineer Inc.’s iterative method of introducing psychology concepts into the class while interjecting real-life examples to better illustrate the concepts being addressed really hit home for me. One main focus of the course was that we only wanted to influence our targets into giving us information rather than manipulating them. We were told we always wanted to leave our targets feeling better for having met us.

Upon returning home from Orlando, I immediately started putting into practice some of the tools and techniques that I learned from both the Social-Engineer Inc. team and my classmates. I found that I was being received much more positively while still achieving the outcome I wanted, without all of the fuss. We all know that everyone is different and internalizes things in their own way, but what we don’t always think about is how to adapt ourselves to various situations (even those outside our comfort zone) in order to elicit information or stretch our skills.

The class itself was held in a classroom during the day but the more interesting part was the homework. Being able to put your skills to use in social engineering engagements with a team was an experience that I didn’t think I would value as much as I do.

For anyone interested in learning social engineering concepts, understanding how to adapt to various communication profiles or even just sharpening your skills with knowledgeable and interactive instructors, I highly recommend this experience. For those of you who are interested in social engineering concepts but aren’t quite sure about training, check out .

Link to the training:

Security Awareness in K-12 School Districts


As most of us know, security awareness is the knowledge and understanding members of an organization possess to protect both physical and information-based assets. Over the past several years, security training has become more prevalent across industries as organizations begin to realize the value in training individuals to be the first line of defense against compromise. I have had the opportunity to work closely with several K-12 public school districts to support their technology infrastructure and have noticed a gap in the level of concern about securing information assets.

The overarching purpose of this project was to gather data to support an initiative for security awareness training in K-12 programs, as well as to start the security awareness conversation in the K-12 education industry.

Project Objectives

  • Develop an assessment methodology focusing on physical security, access control and phishing.
  • Develop education materials that can be distributed to K-12 educators.
  • Gather data to show that security through education applies to K-12 education.
  • Improve the security awareness knowledge of K-12 educators in the selected district.
  • Provide data to a sponsor school district to support new security initiatives.


To obtain results in a repeatable way, several research methods were used to gather data. It should be noted that not all of the materials created were directly related to the data gathered throughout this project; the data gathering focused on three specific areas of interest:

  • Surveys sent to staff
  • Developed assessments to log physical security offenses
  • Phishing Emails


To establish a baseline level of knowledge for the research participants in this K-12 district, two surveys were used, both delivered to the staff being assessed through anonymous Google Forms.

Due to time constraints, it was necessary to use a survey tool that automatically aggregated the results and created a final report. While there are many online survey tools, Google Forms was chosen as the delivery method since the receiving audience was familiar with this form of survey delivery. The participating school district suggested this method, as the participants would be more comfortable knowing it was coming from the district rather than an outside source.

The initial survey questions were a subset of those found in a security awareness survey published by SANS through their Securing The Human initiative.

The final survey questions were a similar subset of those found in the initial survey with some additional open ended questions.

Physical Security Assessment

Only one building, the participating school district’s high school, was assessed during the data gathering phase of this project. A total of 26 rooms were assessed, looking for the following offenses:

  • Number of passwords on sticky notes
  • Number of characters found in each password
  • Number of passwords found in plain view
  • Number of passwords found under keyboards
  • Number of computers not locked

Phishing Emails

With the help of Chris Hadnagy and Social-Engineer Inc., two rounds of phishing emails were sent to a total of 107 end users.

The first phishing email is found below:

The second phishing email is found below:

The same page was displayed for both iterations of phishing found below:


Initial Survey

First Physical Security Assessment

Phishing Emails

Phish 1 results:

Phish 2 results:

Second Physical Security Assessment

Final Survey


The surveys revealed an overall improvement in the understanding of security awareness topics including password strength, physical access and access control. They also showed that developing security awareness content has a positive effect on the understanding and adoption of security practices by K-12 teachers. Additionally, the surveys provided metrics to baseline initial security knowledge and validated the methodology in terms of aggregating data across various disciplines.

The physical access assessments showed an overall improvement in the number of passwords found in the assessed classrooms. While the number of computer workstations that remained unlocked stayed the same, it should be noted that they were in different locations than during the first physical assessment.

The access control assessments showed an overall decrease in the entropy of the passwords found between the first and second assessments. This can be attributed to the decreased number of passwords found during the second physical assessment.

The phishing assessments showed an overall improvement in the number of click responses recorded between the first and second phishing campaigns.

Despite the results attained throughout this project, it is difficult to measure a learner’s education. Moody and Sindre (2003) discussed two different types of learning assessments: performance-based and perception-based. Performance-based assessments are based on achievement tests, often measuring the differences between two groups. Perception-based assessments are surveys passed out to end users asking them to identify their understanding of an issue relative to what they think they understand. The information gathered in this project is largely perception-based; this was due to the time limitations associated with providing a standardized education, and the development of the deliverable course is the ultimate performance-based assessment methodology.

Further Research and Recommendations

Based on the data acquisition performed in this project, being able to replicate the results is extremely important. While gaining access to K-12 school districts may be difficult, it is imperative to continue research in the field of security awareness targeting K-12 educators. Though financial resources are often limited in K-12 institutions, the amount of risk mitigated by providing training to staff members greatly outweighs the potential for data disclosure to undesirable parties.

Further phishing email campaigns should be conducted to validate the findings of this project and further support the hypothesis that “security through education does have a statistical impact on K-12 educators”. While phishing campaigns like the ones conducted in this project have a monetary cost associated with them, there are free alternatives such as the Social Engineer Toolkit (SET).
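For reference, getting started with SET looks roughly like the following; the repository URL and installer details are assumptions on my part, so check the TrustedSec documentation for the current procedure.

```shell
# Sketch: installing and launching the Social Engineer Toolkit (SET).
# Repository path and setup steps may have changed since this was written.
git clone https://github.com/trustedsec/social-engineer-toolkit.git
cd social-engineer-toolkit
sudo python setup.py install
sudo setoolkit   # interactive menu-driven interface
```

From the interactive menu, the credential-harvesting and mass-mailer options used for campaigns like the ones above live under the social engineering attack vectors.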

Involving an industry expert would also help target specific individuals in spear phishing campaigns. One component not done throughout this project was target profiling for individuals; in the real world, it is likely that end users could be victims of specifically targeted attacks about which they are not educated.

Further physical access controls should be defined, both for assessment purposes and for the protection of student data. The criteria for data research provided in this project were extremely basic, and school districts should not measure based only on the criteria used here. It is recommended that they consult with a physical security expert.

Further password policies should be developed for all school districts, especially for educators. Microsoft’s minimum requirements when password policies are enabled are eight characters drawn from three of four character classes: upper case letters, lower case letters, numbers and symbols. While these are the minimum requirements, school districts should be looking toward password requirements above twelve characters with a password reset policy of at most ninety days. While this would cause potential technical constraints and issues, the increased security posture of the organization would be an overall benefit.
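As an illustration of that recommended policy (twelve-plus characters, at least three of four character classes), here is a minimal shell sketch; the function name and example passwords are hypothetical, not part of the project:

```shell
# check_password: hypothetical helper that tests a candidate password against
# the recommended policy above (at least 12 characters, at least three of the
# four character classes: lower case, upper case, digits, symbols).
check_password() {
  pw="$1"
  classes=0
  [ "${#pw}" -ge 12 ] || { echo "too short"; return 1; }
  case "$pw" in *[a-z]*) classes=$((classes+1));; esac
  case "$pw" in *[A-Z]*) classes=$((classes+1));; esac
  case "$pw" in *[0-9]*) classes=$((classes+1));; esac
  case "$pw" in *[!a-zA-Z0-9]*) classes=$((classes+1));; esac
  if [ "$classes" -ge 3 ]; then echo "ok"; else echo "not enough character classes"; fi
}

check_password 'CorrectHorse42!'   # prints "ok"
check_password 'alllowercase'      # prints "not enough character classes"
```

In a real Active Directory environment this would of course be enforced through Group Policy rather than a script; the sketch only makes the class-counting rule concrete.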

For the specific participating school district, it was recommended that further security awareness training be conducted on a regular basis, in conjunction with annual assessments, to start improving its overall security posture across all of its schools. Since only a portion of the entire school district was measured, there are still many end users who need to be educated.

Incorporating security awareness components into the training educators receive is only the first step for this training program, with the possibility of leading into student education. While students in K-12 are already taught a wide variety of materials, by the time they graduate they are exposed to much different scenarios than the ones the course material was created for years ago. A cyber security / security awareness education program should be malleable enough to be updated annually to focus on the new threats students face, along with modernizing the delivery mechanism. K-12 students are much more used to mobile devices than traditional desktop computers, and the course content should be directed toward that delivery mechanism.


This project would not have been possible without the sponsorship of Nick Griswold and the guidance of Chris Hadnagy. The infosec community is growing every day, and it is awesome to be able to work with industry experts directly. Thanks again to both of you.


Moody, D., & Sindre, G. (2003). Evaluating the Effectiveness of Learning Interventions: An Information Systems Case Study.

The Importance of Your Homelab

When breaking into any field in technology, getting hands-on experience is one of the most important things you can do to change the way you approach complex problems. Designing a homelab that is right for you can be somewhat challenging, but there are essentially three paths to take depending on your desires and technology interests.

  1. A free open source hypervisor such as Oracle VirtualBox.
    VirtualBox is great for beginners through experts who want an easy way to get going and spin up virtual machines for various tasks. Oracle VirtualBox was created to cater to the end user who wants to tinker and seems to be a direct competitor to VMware Workstation. That being said, not all of the features of VMware Workstation are available in VirtualBox. If you want to get started with some VMs and mess around with Linux, this is the way to go.

  2. A server-based hypervisor.
    While VMware ESXi for personal use does not come with all of the shiny features available in the enterprise editions, who needs them at home? A server-based hypervisor will allow you to be more mobile and build some complex networks. Looking for experience with pfSense and network diagnostic tools in an always-on environment? Try taking a look at ESXi to get started. It has much of the same feel as using VMware products in an enterprise setting, with feature sets comparable to those of VirtualBox. There are plenty of guides on the internet on how to set up whiteboxes for this sort of thing.

  3. A full-blown development lab at home.
    While this is a rarer approach, it often provides the most flexibility when trying to acquire knowledge. It is also the most costly, as it requires you to purchase equipment up front, including switches, servers and maybe a SAN/NAS of some sort. Working with a server hypervisor that can do clustering, like XenServer, you will be able to really start developing complex application networks at home while controlling the network infrastructure everything runs on.

What is my setup?
I am currently somewhere in between options 1 and 2. My desktop machine at home can hold a large amount of memory, which allows me to build out the networks I need for testing in a sandbox environment. This is great for Python and PowerShell script development, and it lets you have a local sandbox Active Directory environment to mess with so you don’t run things on your production boxes.

Need help in getting started? Let me know!

Greyhole as a Raid Alternative

Over the last several years, there have been some great discussions on how to set up a home media server with some form of drive pooling, whether through software distributions like FreeNAS or through traditional RAID. I am going to review a drive pooling approach that can make the creation of your home media server easy while allowing you to scale at a reasonable rate. Greyhole functions much like WHS drive pooling did, using rsync and Samba to keep track of your files.


After Microsoft decided to remove its form of drive pooling from Windows Home Server, users began looking for alternatives. Greyhole can be installed on most Linux servers, but for the purposes of this article, we will demo a setup of Greyhole on Ubuntu 12.04 64-bit. (There are other ways to set this up – see Greyhole’s documentation.)

The first thing that needs to be done is to complete your install of Ubuntu. I recommend any LTS release that is still under current support.

You should see something like the following on your default install.

As you can see from the picture above, only one disk (sda) is mounted and partitioned. Create partitions on the new disks using fdisk.

Also be sure to create a filesystem on each new partition using:

sudo mkfs.ext3 /dev/sdX#

Get the UUID of each partition.

Add each of these to your fstab entries and mount the disks.
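A hedged sketch of these steps for a single new disk; the device name, mount point and filesystem choice are illustrative, so substitute your own:

```shell
# Prepare one new disk (sdb) for the Greyhole storage pool.
sudo fdisk /dev/sdb                # interactively create a single partition (sdb1)
sudo mkfs.ext3 /dev/sdb1           # create the filesystem on it
sudo blkid /dev/sdb1               # note the UUID this prints
sudo mkdir -p /mnt/hdd0
# Add a line like this to /etc/fstab, using the UUID from blkid:
#   UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/hdd0  ext3  defaults  0  2
sudo mount -a                      # mount everything listed in fstab
df -h                              # confirm the new disk is mounted
```

Repeat for each disk you want in the pool, using a separate mount point for each.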

On Ubuntu, Debian and any other distribution using APT, you can use the Greyhole APT repository to install and keep Greyhole up to date.
Add the Greyhole APT repository to your APT config and import the GPG public key, then use apt-get to install Greyhole (the same apt-get install command will also update Greyhole later):

sudo sh -c 'echo "deb stable main" > /etc/apt/sources.list.d/greyhole.list'
curl -s | sudo apt-key add -
sudo apt-get update
sudo apt-get install greyhole

After Greyhole has finished installing, we have to set up Samba. Greyhole works by watching the Samba logs for activity and performing actions based on the config file. For more information on how Greyhole works, click here.

Direct from the Usage file

Edit /etc/samba/smb.conf
Change or add the following values in the [global] section:

    unix extensions = no
    wide links = yes

Configure your shares. Example share definition (taken from the USAGE file):

        path = /path/to/share
        create mask = 0770
        directory mask = 0770
        read only = no
        available = yes
        browseable = yes
        writable = yes
        guest ok = no
        printable = no
        dfree command = /usr/bin/greyhole-dfree
        vfs objects = greyhole

Restart the samba service.
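On Ubuntu 12.04 that looks roughly like the following; service names vary by distribution, so adjust as needed:

```shell
# Restart Samba so it picks up the new share definition and the greyhole VFS object.
sudo service smbd restart
sudo service nmbd restart
# On systemd-based distributions this would instead be:
#   sudo systemctl restart smbd nmbd
```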

# Make sure your MySQL server service (mysqld) is running, and runs on boot.
Fedora: service mysqld start; chkconfig mysqld on
Ubuntu (< 10): /etc/init.d/mysqld start; update-rc.d mysqld defaults
Ubuntu (10+): start mysql
Debian: service mysql start

# Remove the -p parameter if your MySQL root user doesn't require a password for local connections.
mysql -u root -p -e "create database greyhole; grant all on greyhole.* to greyhole_user@localhost identified by '89y63jdwe';"
mysql -u greyhole_user -p89y63jdwe greyhole < /usr/share/greyhole/schema-mysql.sql

Customize the Greyhole configuration file, /etc/greyhole.conf, as needed.
Important: you need to either use the date.timezone setting in your php.ini, or specify your timezone in greyhole.conf, using the timezone config option.

You will need to specify the storage_pool_drives with minimal free space:

#       storage_pool_drive = /mnt/hdd0/gh, min_free: 10gb
#       storage_pool_drive = /mnt/hdd1/gh, min_free: 10gb
#       storage_pool_drive = /mnt/hdd2/gh, min_free: 10gb
#       storage_pool_drive = /mnt/hdd3/gh, min_free: 10gb

Start the Greyhole service. Errors will appear in the Greyhole log, which you can check with:

greyhole --logs

If everything was configured correctly, you should be able to run “greyhole -s” and see something like this appear.

To test the Samba log functionality, mount the Samba shares locally.

If Samba is configured correctly, you should see something like this when running “df -h”.
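A sketch of the local mount; the share name and credentials are placeholders for illustration, so substitute the share you defined in smb.conf:

```shell
# Mount a Greyhole-backed share locally over CIFS so file operations go
# through Samba (and therefore get logged for Greyhole to act on).
sudo apt-get install cifs-utils
sudo mkdir -p /mnt/samba/share_name
sudo mount -t cifs //localhost/share_name /mnt/samba/share_name \
    -o username=your_user,password=your_pass
df -h    # the mounted share should now appear in the output
```

Writing a test file into the mounted path and then watching `greyhole --logs` is a quick way to confirm the whole chain is working.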

The Practice of Network Security Monitoring

I have finally been working through The Practice of Network Security Monitoring by Richard Bejtlich and will review some of the things I have learned thus far. This NSM book was just what I needed to get a grasp on the basic (and some more intermediate) steps in implementing NSM in an organization. One benefit for me was that I had a network without NSM to play with, and I would strongly recommend that (non-business-critical to start with!).

1 ) Understanding how your network is set up will really help your understanding of the data aggregated during the assessments. Furthermore, network diagrams are something you should already have for your organization; if not, they are something that should be worked on and updated on a regular basis. This holds true for IP address assignments as well: you should know which addresses are in your DHCP scopes and which are static. This may seem normal to some of you, but in several organizations I have been involved with, getting this standardized is a pain.

The test network I used was not a business-critical network at my employer (after obtaining permission from management), and at the start I was unaware of the network and traffic flows associated with this specific section. Beware: if your organization does not have network diagrams and you show your manager the ones you make, you may be asked to start working on diagrams for all of the networks.

2 ) Security Onion is an awesome and easy starting point. I chose to install Security Onion in my development lab on VMware ESXi (which has access to the network I chose to start monitoring). The setup was very easy. That being said, I chose a standalone system, for the sake of learning, that only monitored the one subnet. Security Onion does offer a distributed deployment option, which I have not had any experience with thus far.

3 ) If you have experience with Wireshark and WinPcap, tcpdump is easy to pick up. If you understand how network packets are assembled, you should be all set. There are several other tools explained in the book that are all basic components of a network administrator’s toolbox (or should be).
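For example (the interface name and subnet here are assumptions, not from my lab), a basic tcpdump session might look like:

```shell
# Show the first 10 packets on the monitored subnet without DNS name resolution.
sudo tcpdump -i eth0 -nn -c 10 'net 192.168.1.0/24'

# Capture full packets (-s 0) to a file for later analysis in Wireshark.
sudo tcpdump -i eth0 -nn -s 0 -w capture.pcap 'net 192.168.1.0/24'
```

The same BPF filter syntax carries over from Wireshark capture filters, which is why the transition is so easy.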


I am only about halfway through the book as of now but would recommend it for anyone looking to get a grasp of NSM. Understand that this is an introduction to the field and more work will be required.

Python Refresher

When you have the opportunity to take a break from the office, it is almost necessary to stay up to date on (if not catch up with) events that have been occurring in the information technology and security fields.

I have had the opportunity to take a holiday and have been working through my backlog of books, RSS feeds and forums. One of these projects or to-do items is to finish a course on CodeSchool. When I was an undergraduate, Python was taught as a replacement for the intro-level Java course, which I think was a great option. CodeSchool offers a great refresher on the subject, and I am currently working through the Python track.

I would strongly recommend CodeSchool to those who need a quick interactive refresher or to those looking to pick up a new language.

Dell Equallogic MPIO Issues

Over the last few years, I have been exposed to the storage architecture surrounding Dell EqualLogic. EqualLogic’s market seems to be the mid-range storage tier, providing a cost-effective way for small to mid-size organizations to start their own SAN (storage area network) deployment. When EqualLogic was originally released to the market, they were their own company, but they were eventually purchased by Dell, and since then we have seen changes, some for the better and some for the worse.

Example Infrastructure Design:

  1. HP DL585 G7 cluster (3 hosts in the cluster)
    • Management and vMotion are on a unique designated VLAN
    • vSphere 5.1 Update 1 (Build 1117900)
    • Dell MEM 1.1.2 installed
  2. EqualLogic PS5000 units in a pool
    • iSCSI traffic is on a separate VLAN from vSphere management
    • No spanning tree enabled; flow control enabled; jumbo frames enabled
  3. Network infrastructure
    • A pair of Cisco Nexus 2248s connected to a pair of Cisco Nexus 5000s

One specific issue we seem to be experiencing is an ongoing multipathing-related error in our vSphere event logs. An example is below:

VSphere Event Log Message 1

We were seeing these events several times every 5 minutes before things would return to their natural state. An example is below:

VSphere Event Log Message 2

Notice the time difference of approximately 1 second.

What you will see in the EqualLogic event logs will look similar to the following (target and initiator names elided):

iSCSI session to target '' from initiator '' was closed. | Logout request was received from the initiator.
iSCSI login to target '' from initiator '' successful, using Jumbo Frame length.


After working with Dell, a level-two technician came up with the following solution, which has resolved the issue for us.

Disable LRO in ESX v4/v5

Note: After upgrading the ESXi host to 4.1 or 5.x and upgrading the VMs’ VMware Tools to 4.1 or 5.x, you may experience slow TCP performance in VMs running on that host. You can address this by disabling Large Receive Offload (LRO) on the ESXi host.

To disable LRO, follow this procedure:

  1. Log into the ESXi host or its vCenter with vSphere Client.
  2. Select the host > Configuration > Software:Advanced Settings.
  3. Select Net and scroll down slightly more than half way.
  4. Set the following parameters from 1 to 0:
    • Net.VmxnetSwLROSL
    • Net.Vmxnet3SwLRO
    • Net.Vmxnet3HwLRO
    • Net.Vmxnet2SwLRO
    • Net.Vmxnet2HwLRO

Reboot the ESXi host to activate these changes.

Within VMware, the following command will query the current LRO value.

  • esxcfg-advcfg -g /Net/TcpipDefLROEnabled

To set the LRO value to zero (disabled):

  • esxcfg-advcfg -s 0 /Net/TcpipDefLROEnabled

A server reboot is required.

Your guest VMs should now have normal TCP networking performance.
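For reference, the per-parameter GUI changes in the numbered steps above can also be scripted from the ESXi shell; this is a sketch using the same esxcfg-advcfg command shown earlier, with the parameter names taken from the list above:

```shell
# Disable LRO for the vmxnet adapters from the ESXi shell, mirroring the
# GUI steps above. Run on the host itself, then reboot for the changes
# to take effect.
for p in VmxnetSwLROSL Vmxnet3SwLRO Vmxnet3HwLRO Vmxnet2SwLRO Vmxnet2HwLRO; do
    esxcfg-advcfg -s 0 /Net/$p
done
```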


As is evident on both sides, something is triggering an event that causes the software iSCSI initiator to log out of the volume and then immediately reconnect. There seems to be some disconnect between the Dell MEM module and what vSphere is trying to do.