Review – Asus G14 Zephyrus Model: GA401IV-BR9N6

While most of my finds are bargains, refurbs, etc., this laptop is a bit different – I paid full price. In general, I don't like paying full price for a system, but in this case the reviews I saw were so overwhelmingly good that I bought it. Here is the ad from Best Buy. The system can be found here. LINK

Most of the specs are listed in the ad above, but listing them out:

AMD Ryzen 9 4900HS
16GB Micron Memory DDR4-3200MHz CAS 22 (8GB soldered onboard, 8GB in a slot, expandable to 40GB)
NVIDIA RTX 2060 Max-Q 6GB GDDR6 video card (running at PCIe x8)
1TB Intel 660p SSD
Realtek Audio Card
Intel Bluetooth
Intel Wifi 6 AX200 Card
14-inch CEC LM140LF-1F01 1080p/120Hz Screen
79Wh Battery (fully charged)

Included in the box were very few items: a little bit of paperwork, the laptop, and the AC brick (which is decently sized considering it's 180W).

When I received the system, I immediately checked for newer drivers, and there was a newer BIOS out. I want to say this is one of the slickest BIOS updates I have ever done, as far as ease of use goes. I loaded the firmware update from inside Windows Device Manager by just clicking on the firmware entry and telling it to update automatically. It grabbed the new file from Windows Update and then prompted me for a reboot. It proceeded to load the BIOS after the reboot, and that was that. Truly, we are finally living in the future! Kudos to Asus for that.

I started off doing most of my benchmarks and then realized that I hadn't enabled the high-end profile in the ASUS software. It was set to the "High Performance" profile when I needed to be on "Turbo", after which the machine performed about 200 points better in most of the benchmarks.

Before I get ahead of myself – unboxing. The box was in typical ROG fashion and looked good (Yes, I know my desk needs cleaning):

The system itself feels solid and well put together. The keyboard has a great feel to it and gives (to my fingers) excellent feedback. The touchpad worked flawlessly and felt solid no matter what area you pressed to click. The keyboard's backlighting IS very uneven, though. It works, but it doesn't look very gamer-like. Single backlighting color only: white. The bezels are nice and small, and I agree with most of the other reviewers out there – it would have been nice for Asus to include a webcam. It can be done on small bezels, as proven by the XPS line. Overall, I'm satisfied with the fit and finish of the machine.

The size is decent as well. I like the 14″ size. Definitely feels better than most 15.6″ laptops. It still has some size to it, (considering I’ve been using the XPS 13 it is bound to feel that way) but at 3.5lbs, it stays easily carriable. Here you can see it next to the XPS 13.


Processor: The new Renoir 4000 series finally feels like a high-end, finished processor. The 3000 series, such as the 3750H, wasn't bad and did an admirable job, but this 8-core processor just adds another level of speed. Details on the processor:

Ryzen 9 4900HS
Base 3.0GHz – 4.4GHz Turbo (according to HWiNFO)
8 Cores / 16 Threads
12MB cache (512KB L1, 4.0MB L2, 8.0MB L3)
7nm with a 35W TDP (regular 'H' processors add 10W)
Vega 8 iGPU

RAM: As mentioned above, due to the thin and light aspect of this system, Asus opted to solder 8GB of RAM onboard and have one slot available for upgrade purposes. This makes for some interesting RAM combos. You can change the included 8GB RAM for a 16GB or a 32GB stick for a total of 24GB or 40GB. The packaged Micron RAM is 3200MHz and has CAS timings of 22/22/22/52.

NVMe Intel 660p SSD: This is the second time I have gotten an Intel 660p in an Asus Laptop so I can’t say I’m surprised. While I would have preferred a TLC drive, for most people the QLC SSD will work fine.

Video Card: NVIDIA RTX 2060 Max-Q. While this is a bit slower than the normal Mobile 2060, this was done for battery life and heat I’m sure. There is not a whole lot of room in the 14″ frame to put a lot of heat pipes and fans hence the decision. I definitely would prefer the full 2060 but this card didn’t do too bad in testing, and I’m willing to accept some tradeoffs for the price and portability.

Video Panel: CEC LM140LF-1F01. This is one of the areas where I wish they had spent a little more of the budget. It isn't bad, but it isn't great either. It's 120Hz, but from what I've read it will only run at 120Hz when playing games or other apps that really need it, and will normally default to 60Hz. I'd prefer 120Hz all the time, but I understand they did it for battery life. There is more out there on this panel HERE.

So on to the benchmarks!


One of the tests I ran this time was transcoding. I took a 5-minute 4K video and transcoded it to standard-quality 1080p. The transcode took almost exactly 3 minutes.

In the benchmarks below, I'm not sure exactly why it didn't recognize the RTX 2060 Max-Q. It was definitely using it, though, and the NVIDIA driver was at the latest version. Here are my 3DMark results. First, I ran Time Spy 1.0:

The new Asus did 5962 vs the older Ryzen 7 3750H's 4471. Nice upgrade.

Next benchmark up was Sky Diver:

The new Asus G14 turned in 33,001 vs the older Ryzen's 22,004. A nice upgrade. This still trails a bit behind my Alienware at 45,482, but according to the graph it is right in line with Intel gaming laptops with the same video card.

Next up was Fire Strike.

Final tally was 14,102. I didn’t run this test unfortunately on the previous Zephyrus, so I can’t get a good comparison.

PC Mark

This one surprised me, as it wasn't very far off from my Alienware desktop. The Alienware received a 6004, and the previous-gen Zephyrus a 4288.

I then ran Cinebench R20.

This was a nice jump over both the previous gen and even my Alienware desktop – a big surprise. The Alienware got a single-core score of 454 and a multi-core score of 3031. The previous-gen Zephyrus with the Ryzen 7 3750H received 361 single-core and 1745 multi-core. This is a HUGE bump over the previous Ryzen, and I wasn't expecting the increase over the i7-8700 desktop processor.

I next ran a User Benchmark and Tomb Raider’s Benchmark on it.

I think it was completely respectable for a laptop video card. Finally, I did a disk performance test on the Intel 660p. I didn't go much beyond that, as it has been written about plenty.

Not bad overall. I know it will slow down with longer transfers, but I'm fine with that. The last thing I really wanted to highlight was the Wifi 6 card's bandwidth. I copied a 50GB movie file from the laptop over my network. I am using a TP-Link Archer AX50 router, and my distance to the router was about 3 feet, so it's hard to get much better than that. The connection speed fluctuated between 1.0Gbps and 1.1Gbps. The transfer averaged about 70 MB/s, which is pretty good in my opinion considering Wifi overhead and occasional frame loss / retries.
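Just to sanity-check those numbers, here's a quick back-of-the-envelope calculation (the link rate, file size, and average speed are the ones measured above):

```python
# Rough sanity check on the Wi-Fi 6 transfer numbers above.
link_rate_gbps = 1.0                          # low end of the reported link speed
theoretical_mb_s = link_rate_gbps * 1000 / 8  # Gbps -> MB/s, ignoring all overhead
measured_mb_s = 70                            # averaged transfer speed observed

efficiency = measured_mb_s / theoretical_mb_s  # fraction of the raw link rate achieved
transfer_min = 50 * 1000 / measured_mb_s / 60  # time to move the 50GB file

print(f"Theoretical max: {theoretical_mb_s:.0f} MB/s")  # 125 MB/s
print(f"Efficiency: {efficiency:.0%}")                  # 56%
print(f"50GB transfer: ~{transfer_min:.0f} min")        # ~12 min
```

Mid-50s percent efficiency is about right for real-world Wi-Fi once protocol overhead and retries are factored in.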


I'm enjoying this machine, and now I need to finish customizing it to properly use it and finish my assessment (battery life still needs to be tested). Based on the previous iteration with the Ryzen 7 3750H, and even against current Intel processors, this machine will compete. In my opinion, this machine is highly recommended and will be occupying a space in my stable of PCs. I look forward to seeing AMD continue to put out highly competitive and cost-effective products.

Dell Outlet – Alienware Aurora R7 bargain find

While I was benchmarking the ASUS ROG, I was on the Dell Outlet looking at their deals for Memorial Day. Lo and behold, I found one! The system I found was the following:

Alienware Aurora R7
i7-8700 (6c/12t)
16 GB RAM (2×8 Hynix 2666MHz)
256 GB (PC401 Hynix NVMe 1.2)
2 TB (Toshiba 7200 RPM)
Dell Nvidia 1080 Ti 11 GB GDDR5X (says it's an MSI)
Intel Z370 chipset
850W 80 Plus PSU

And I got all this for the paltry sum of… $985. Add tax, etc., and I was still under $1100 – less than the ROG. Yes, I am aware that it is not a valid comparison, since it is a laptop vs a desktop. However, I did decide to benchmark it as well and compare the two. To be fair, the ASUS was very snappy and impressed me in a lot of ways, from the 120Hz screen to just general responsiveness. It was very crisp and I loved it. The lure of a 1080 Ti for under a grand was too tempting, though, so the ROG will be going back to Best Buy and the Alienware will be taking its place as the main gaming machine. The XPS 13 9380 keeps its spot as the remote-work machine without a fight. The XPS 13 is awesome, but that's a different post.

To begin with, we have the unboxing. The machine was packaged well, with lots of padding to keep it secure during shipping. It came with an Alienware-branded keyboard and mouse. Both were decent but nothing special. The keyboard looked like it was backlit, but it turned out not to be. It did have a good feel to the key action, allowing for very fast typing with a satisfying soft click.

The machine itself has plenty of ports, with both USB-C and A and enough ports for 7.1 sound and all the other usual ports expected from an enthusiast machine. Inside is pretty cramped with the power supply actually needing to swing out in order to get to the motherboard and components below. There is room for one NVMe drive and 4 hard drives, although two of them will need to be 2.5″ or smaller. All the other components inside are dwarfed by the 1080 Ti card. Once the machine was turned on, I couldn’t help but notice I heard nothing. This machine’s processor is liquid cooled, the fans are quiet, and add in the sound deadening of the case and I have to almost put my head right next to it to hear it operating normally. This may change when I start gaming on it, but it is nice to have a quiet machine in my home office – since I have 7 rack mount servers on the other side.

As for the components, most of them are generic OEM-sourced parts rather than something you would pick up yourself to put a system together, but during my benchmarking they did really well. The 1080 Ti is a generation old now that the 2080 Ti is out, but it is still a very capable card, as my benchmarks show. I'll still go into them as much as I can, of course.


Processor: Not much new here. Most people are well aware of Intel's i7-8700 8th Gen Coffee Lake processor. This one is running 6 cores with Hyper-Threading for 12 logical cores. It runs at a base frequency of 3.2GHz and turbo boosts to 4.6GHz. TDP is 65W, with a total of 12 MB of cache. It also has onboard graphics in the form of the UHD 630.

RAM: My system came with 2×8 GB RAM sticks running at 2666MHz. The motherboard allows for up to 64GB at up to 2933MHz. The newer Aurora R8 allows for faster RAM to be used.

NVMe Hynix SSD: This is one of the parts that Dell has been using in a lot of their systems. It came originally in my XPS 13 as well (before I swapped it for a Samsung 970 Pro). Some of the other ones I've had haven't fared that great, but I was actually surprised at the numbers on this one. You can buy this drive from Amazon in bulk packaging for about $50. Add to that the fact that most manufacturers don't post great numbers for their smaller drives, and this one pleasantly surprised me.

Video Card: There is a lot of info out there on the 1080 Ti as well, so I won't go too far into it here other than to say it's fast. 3584 cores, 11Gbps memory speed, and the need for about 250 watts all by itself. It's a serious card.

Chipset: Intel Z370 wasn’t the top end chipset but it offered overclocking features and solid support for things like RAID, Optane and 1×16, or 2×8 PCI-e slots, depending on the design. This motherboard in the Alienware can support two large graphics cards and even has the power connectors dangling to tempt you every time you open the case. Unfortunately, since I don’t have the i7-8700K processor the overclocking features remain locked to me.

Software: The normal complement of software is installed: a general Windows 10 Home install, with all the extra trimmings that nobody ever wants. McAfee is on there as well. Otherwise, it's not too heavy with junk software – just some Alienware and Dell utilities to keep the machine's drivers and firmware up to date, along with the control software for the case lighting. A few uninstalls and I'm ready to start with my own installs of both the benchmark software and the latest drivers (and Microsoft Windows updates).


Compared to the ASUS ROG, the numbers averaged about 2x as good. I started off with 3DMark's Time Spy and finished with a score of 8846, compared with the ROG's 4471.

Just because I wanted another comparison, I ran Sky Diver as well, even though the test isn't specifically meant for higher-end gaming machines. I ended up with roughly 2x the performance again: 45,482 vs the ASUS's 22,004.

The next test got to stretch the legs, so to speak, of the 1080 Ti. The test was Fire Strike, which is meant for gaming machines. I ended up with a score of 20,811.

The final 3DMark test I ran was the Time Spy 4k resolution test.

I then moved on to the PC Mark 10 benchmark where once again, the desktop made its presence known with a score of 6004 to the 4288 from the ASUS.

I also ran Cinebench R20 on it, and it scored pretty well despite being clocked a bit lower than, say, the i7-7700. I have no doubt that if it were clocked just a bit higher, it would have no issue kicking the i7-7700 to the curb.

The multi-core and single-core scores were much higher than the ASUS's 1745 and 361. Finally, I also wanted to run a benchmark on the PC401 Hynix NVMe drive.

Not bad for an OEM part, but it is obviously still quite a bit behind the Samsung 970 Pro.

Overall, I believe this system to be pretty awesome, and it should last me for a while. It came with a year of onsite warranty, so that gives me plenty of time to put together the new Ryzen build I want over the next 6 months or so. I have a slightly different purpose in mind for the Ryzen, so the Alienware will definitely be hanging around my desktop for a while, looking pretty.

New Gaming Rig – ASUS ROG Zephyrus G GA502

I am forever in search of deals when it comes to computer hardware. This is possibly one of the reasons why I have 7 Dell servers sitting in a half-rack in my house (leading to disapproving stares from the wife). When I saw this laptop on sale ($1050), I thought, "What a really good deal!" I had just bought an XPS 13 and an XPS 15 and had no real reason I could use to justify buying it. Fortunately, I was never one to worry about reasons and good sense when it came to good deals on fast computer hardware. This is definitely a weakness.

Normally, when making a purchase of this magnitude, I try to do a bit more research. But I couldn't find any regular benchmarks done by the usual places I check. Indeed, the only thing I could find was from a person on another forum who had purchased one and run his own benchmarks. Based on those, and the personal need to know more, I decided to buy the laptop. I further told myself that if I did thorough benchmarks and posted them, I might allow myself to keep the machine, provided it performed well enough. (Dangling that carrot.)

Going into this purchase, I was a bit unsure for two reasons. Intel is currently at their 9th-gen Core i-series processors, and their i9s are sitting at 8 cores / 16 threads. This machine sports the newer AMD Ryzen 7 3750H, which is a 4-core / 8-thread processor based on 12nm Zen+. I most likely wouldn't ever purchase an i9 due to the price (no matter how much I would love such a machine), so it wouldn't really be fair, in my opinion, to compare it to one. It would be much fairer to compare it to, say, an i7. That being said, the i7s are still sitting at 6 cores / 12 threads and, on paper, would seem to be quite a bit faster. The second reason is that we are on the cusp of the new 3rd-gen Ryzen Zen 2 chips, and I am really excited to see what they will be able to do at 7nm. I decided to buy it anyway.

Be aware, there is no bias in this review as I do not work for any computer vendor and I bought this computer with my own money. That being said, I would not be averse to accepting computers for testing purposes (hint, hint Dell, Lenovo, HP, etc..). So enough of background. Let’s get into it.


The machine is packaged well enough. The box is relatively spartan inside: a few sheets of paper consisting of the warranty information and other bits of info, the laptop, and the power brick. The machine itself is beautifully simple. There are no weird lines, no extravagant lights – just a small logo on the back that lights up red. Overall, I am a big fan of the aesthetics. If I were nit-picking, the material picks up a lot of finger grease, and there is no webcam. The ports are distributed on both sides, with the back being reserved for getting rid of hot air. The right side has 2 USB-A 3.0 ports and a Kensington lock (and another vent). The left side has a headphone jack, a USB-A and a USB-C port, HDMI 2.0, Ethernet, and the power plug. The USB-C is not Thunderbolt and can't power the laptop (which isn't a surprise considering the included brick is a 180W power supply). That said, I think there are plenty of ports for this price point, and while I wouldn't mind another USB-C with Thunderbolt, I don't really need it considering the laptop should have enough video oomph to do what I want it to do.

The system is relatively lightweight coming in at just over 4.5 lbs. The bezels on the screen have been shaved down, though not to the same extent as the XPS 15. I took a few pictures side by side to compare.

The Hardware

First the processor. The processor is an AMD Ryzen 3750H. The ‘H’ denotes the higher power chips vs the ‘U’ chip line being the lower power, longer lasting chip. The rest of the information (and below graphic) I will pull directly from AMD’s website here.

As you can tell there is built-in Vega graphics. Total of 10 GPU cores, and the processor is a 35W chip, hopefully providing decent battery life.

Next, the memory: Samsung DDR4 RAM running at 2666MHz. In my laptop it was running as a single stick in one slot.

Graphics are handled by a combination of the on-chip Vega 10 and the Nvidia GTX 1660 Ti with Max-Q (for better battery life), with 6GB of Micron GDDR6 RAM. It's interesting to note that when I was digging into the HWiNFO data, I noticed the Nvidia card was running on only a PCIe x8 bus instead of x16. I guess we will see how this plays out later on. There is a way to boost the Nvidia card to a higher clock speed, so that may take up some of the slack.

For hard drive duties, Asus went with the Intel 660p NVMe M.2 512GB drive. This is the first one I have seen come from an OEM PC maker, though I've seen them advertised for a while. This drive uses quad-level cell (QLC) NAND flash instead of the previously common TLC or V-NAND (2-bit MLC) chips found on the much faster Samsung Pro NVMe drives. Generally speaking, a QLC drive provides good performance at a much lower price point than the other two options – 2TB drives are already under $200. Where the drive slows down is mostly in longer (larger) transfers and long-term endurance. That being said, it is still much faster than the usual 5400 or even 7200 RPM rust drives commonly found in laptops.

The system comes with Realtek everything. The Wi-Fi card is a Realtek 8821CE supporting 802.11ac with a 1×1 antenna. The Bluetooth 5.0 radio is also Realtek (presumably the same card), as is the RJ45 port on the side.

The keyboard is backlit, as expected, with N-key rollover. This means each key is scanned and picked up independently when pressed – allowing you to press a lot of keys at the same time (useful for key combos in games, for example) and have every stroke flawlessly registered. The backlight color is white and non-RGB. It doesn't have lighting zones or any of the other higher-end capabilities. That being said, I like the white, and the key press is solid and feels good when typing, with decent key travel. Again, to be nitpicky, the keys seem a little offset, which makes it a little harder to get comfortable typing, and the touchpad, while capable enough, is a bit on the small side compared to other laptops. Again, it is a grease magnet…

The final piece of hardware that I was excited for on this laptop was the 120Hz 1080p non-touch LED panel. This is a vIPS, non-glare (no shiny glass yayyyy!) panel capable of either 60Hz or 120Hz refresh rate. The video card should have no trouble using the better refresh rates on games as long as you aren’t trying to run some crazy settings on it. The increased refresh rate should be a lot easier on the eyes for office tasks as well. So far, the LED panel is pretty smooth in regular Windows 10 transitions and I’m enjoying it immensely. There isn’t a whole lot of info (none actually) for this LED panel I have been able to find. The actual model is Sharp LM156LF-GL and it is a 6-bit IPS panel. I can’t find a driver for it or any other info. The viewing angles are good on it and I don’t see any light bleed on the edges which is nice.

As far as the battery, I can't find anything in the listed specs other than claimed battery life, which we all know is bogus. There will be testing on that. HWiNFO lists the manufacturer as ASUSTek with a capacity of 75,887 mWh, or approximately a 76Wh battery. Not bad, but not quite as hefty as, say, the 97Wh option on my XPS 15.

The Software

Nothing out of place here. The system comes with Windows 10 (1809) and has minimal bloat – beyond what already comes with Windows. Asus kept their own software relatively minimal, even eschewing McAfee (yayyy). There are some handy programs loaded on there: MyASUS, which connects your machine to ASUS' website to keep it up to date with drivers and firmware; SonicStudio, a sound and recording program; the Realtek audio console; Game Visual, which lets you change screen settings for different profiles; and finally Armoury Crate. That last program allows you to change settings on your computer (speed / fan acoustics / key combos / and more) for different game profiles. You can also connect to it using a mobile app on your phone, so you can change settings while you are doing other things on the computer. Overall, I like that the ROG, like my XPS, stays pretty clean. I just wish Microsoft would stop trying to force their extra apps on us.


To start, let me go over the setup. I have installed nothing but driver and BIOS updates and Windows 10 updates. Everything should be the latest as of this posting. No tweaks were made; everything is as it came out of the box. The profiles and GPU overclocking mentioned above were not used. Depending on the results, I may go back and try to tweak some of the settings for the NVMe drive, but I want a baseline first. I want to see how well this $1k machine can actually perform from the factory. I bought the benchmark programs (3DMark and PCMark) off of Steam myself, and Cinebench will be thrown in for good measure. I can run benchmarks for anything else if anyone has requests, as I am rather new to this more detailed benchmarking thing. The card, according to ASUS's website, is said to be about as fast as the old 1070. So that is what I am looking for as far as numbers. If those ARE the numbers I get back, I will be extremely impressed with this system for a grand.

The first score was on Time Spy. Once again, no tweaks were made on the system yet. I compared it in the graphic to some i7-7700HQ and 1070 Max-Q cards with 8GB of RAM. And my tests came out pretty favorable. The 5197 score was the highest in the DB for that setup.

I then tested it on the Sky Diver 1.0 test and once again got pretty decent results considering the cost and such of this machine. Again, the 29209 score was the highest in the database.

For the PCMark 10 test I added the i7-8750H just for comparison's sake. Once again it showed well, considering. It is notable that the 2nd result had a Samsung Pro NVMe drive in it as well as 32GB of RAM. (One thing to note: I went back after loading Nvidia's newest driver, re-ran PCMark 10, and got a score of 4345.)

It is harder to compare these to other notebooks in this section because I don't have control over what is inside them. I am trying to keep the hardware as comparable as is within my control, though. Next, I moved on to Cinebench R20. Both single- and multi-core tests were run.

It lags a bit behind the 7th-gen i7, but that's not bad considering the clock speed difference. I then tried to tweak things a bit to see if I could get any more performance out of the benchmarks. To do this, I selected the 'Turbo' profile from ASUS's Armoury Crate program and re-ran the same tests. Before I even share the results: I could see a marked difference in the smoothness of the video being rendered. The fans were spinning their little hearts out, but even still it just sounded like a loud breeze going by, not a turbine spinning up. That being said, I'm sure some would find this a bit loud and would need headphones for gaming. As far as heat, the bottom got warm but never uncomfortable.

For the third test, I loaded Nvidia's newest driver on the machine. As soon as I did, the machine had trouble switching to the Nvidia card instead of using the Vega. I tried to remedy this by making the Nvidia card the default and re-ran the tests. That seemed to work. I guess loading the Nvidia driver by itself broke the switching, but I would need to test this further.

The left column, in 3rd place, is no profile and the original driver. 1st place, in the middle, is the Turbo profile and the older driver. 2nd place is the new driver and the Turbo profile. I'm a little confused by that result, but overall the few-MHz overclock doesn't make a tremendous difference.

I ran Time Spy again to see how that would stack up, and the same results repeated themselves – exactly. In third place, the original driver and no profile; in 1st, the original driver and Turbo profile; and in 2nd, the new driver and Turbo profile. Notice, though, that the 1660 Ti is running at the same speed as in the original run, leading me to believe that perhaps ASUS's program is not able to overclock it with Nvidia's driver. Yet it still performs better than the original run, pointing to a better driver itself.

The results bothered me a bit so I went back and reinstalled and rebooted. After I did that, I ran the Sky Diver again. This time I got the results I expected. The new driver with Turbo profile came in first.

I ran some CrystalDiskMark numbers as well on the Intel 660p, just to see where we were sitting. The results look pretty close to the stated specs for the drive. This is quite a bit lower than a Samsung Pro drive would be. For comparison's sake, I have run the same test on my own Samsung 970 Pro NVMe drive in my XPS 9380 13″ laptop, shown second.

There is more to speed than just benchmarks but you can see the difference in the numbers shown.

One more "benchmark": I downloaded Fortnite, since that is one of the games I will be playing (it's for my son…) on it. I ran it on Epic settings with the framerate cap set to 120 fps. During the whole game, I don't believe I fell below 65 or so, with the majority of the time running in the 80s. That is better than I could play it on my desktop with a GTX 1060 3GB card and an i7-2600.

Battery Life

Final test was battery life. I ran a movie on loop at 50% brightness and didn’t do anything else but wait. The movie was an MP4 file streamed from the NVMe drive. Even 50% brightness was more than adequate to watch the movie and at 120Hz the movie looked great. All said and done, the laptop was able to get 4:20 on a full battery. Not too shabby for a larger gaming machine. Of course this would drop quite a bit if gaming.
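Working backwards from that runtime (using the ~76Wh capacity HWinfo reported earlier; this is a rough estimate, since real discharge isn't perfectly linear):

```python
# Rough average system power draw implied by the movie-loop battery test.
battery_wh = 76.0        # capacity reported by HWinfo (~75,887 mWh)
runtime_h = 4 + 20 / 60  # the 4:20 runtime measured above

avg_draw_w = battery_wh / runtime_h
print(f"Average draw: ~{avg_draw_w:.1f} W")  # ~17.5 W
```

Around 17-18W for screen-on video playback at 120Hz seems plausible for a 35W-TDP chip mostly idling its GPU.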


All said and done, so far it is a pretty sweet laptop and, for the price, a good deal. I find it interesting that they included space for another NVMe drive but not a second RAM slot. That, to me, is just weird. I will be running more games on it to fully determine whether I will keep it, and will update if anyone is interested. If I do decide to keep it, I will definitely be swapping the drives. One other thing: it was a bit difficult getting into the case. ASUS did not design this for easy access, in my opinion.

Intro to MongoDB (part 2)

When I left off last part, we were discussing MongoDB’s availability features. We will start next on:

Scalability – We've gone over replica sets under availability. For those who come from more of an infrastructure/hardware background, a replica set is very close to something like RAID 1: you are creating multiple copies of a set of data and placing them on separate physical nodes (instead of drives). This does allow a bit of scalability, but it is not as efficient as, say, RAID 6. So what we will get into next is sharding. Sharding is a lot closer to what us hardware people would think of as RAID 6: you are breaking one overall data set into pieces and spreading those across physical nodes. For refresher purposes, let's throw in a graphic. I've modified it slightly; this diagram makes more sense to me given the above comparison.

Now if we add in shards this is what we look like.

I have just 3 nodes there, but you can scale much bigger. Sharding is automatic and built in – no need for 3rd-party add-ons. Rebalancing is done automatically as you add or subtract nodes. A bit of planning is needed as to how you want to distribute the data: sharding is applied based on a shard key, defined by a data modeler, which describes the range of values that will be used to partition the data into chunks – much like the key on a map tells you how to read it. There are three components to a sharded cluster.

  • Query Router (mongos) – This provides an interface between client apps and the sharded cluster.
  • Config Server – Stores the metadata for the sharded cluster. The metadata includes the location of all the chunks and the ranges that define them (each shard is broken into chunks). Query routers cache this data and use it to direct read and write operations to the right shard. Config servers also store authentication information and manage distributed locks. They must be unique to a sharded cluster, and they store their information in a "config" database. Config servers can be set up as a replica set, since keeping them available is necessary for the sharded cluster.
  • Shard – This is a subset of the data. This can be deployed as a replica set.
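To make those three roles concrete, here's a toy Python sketch (not real MongoDB code – the shard names and key ranges are made up) showing how a mongos-style router uses config-server chunk metadata to direct an operation:

```python
# Toy model of a sharded cluster's routing layer.
# "Config server" metadata: each chunk is a shard-key range mapped to a shard.
chunk_metadata = [
    ((float("-inf"), 100), "shardA"),
    ((100, 200), "shardB"),
    ((200, float("inf")), "shardC"),
]

def route(shard_key_value):
    """The mongos role: find the chunk covering this key and return its shard."""
    for (low, high), shard in chunk_metadata:
        if low <= shard_key_value < high:
            return shard
    raise ValueError("no chunk covers this key")

# A client app never talks to shards directly; it asks the router.
print(route(42))    # shardA
print(route(150))   # shardB
print(route(9999))  # shardC
```

In real MongoDB all of this bookkeeping (plus chunk splitting and migration) happens behind the scenes; this just shows why the config servers' metadata is critical.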

There are a total of 3 sharding strategies: Ranged, Hashed, and Zoned.

  • Ranged Sharding – You would create the shard key by giving it a range and all documents within a range zone would be grouped on the same shard. This approach is great for co-locating data such as all customers within a specific region.
  • Hashed Sharding – Documents are distributed more evenly across shards using an MD5 hash of the shard key. This optimizes write performance and is ideal for ingesting streams of time-series or event data.
  • Zoned Sharding – Developers can define specific criteria for a zone to partition data as needed by the business. This allows much more precise control over where the data physically lives. If a customer is concerned about data locality, this is a great way to enforce it – reasons might include GDPR compliance, etc.
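To see the difference between the first two strategies, here's a small Python sketch (illustrative only – the ranges and shard count are made up) contrasting ranged and hashed placement for sequential keys, using an MD5 hash as described above:

```python
import hashlib

shards = ["shard0", "shard1", "shard2"]

def ranged_shard(key):
    """Ranged: nearby keys co-locate on the same shard (good for regional data)."""
    if key < 1000:
        return shards[0]
    if key < 2000:
        return shards[1]
    return shards[2]

def hashed_shard(key):
    """Hashed: an MD5 hash of the key scatters even sequential keys
    (good for time-series/event ingest, avoiding one hot shard)."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return shards[int(digest, 16) % len(shards)]

sequential_keys = range(100, 110)
print([ranged_shard(k) for k in sequential_keys])  # all land on shard0
print([hashed_shard(k) for k in sequential_keys])  # scattered by the hash
```

The trade-off is visible right away: ranged keeps related keys together (great for range queries, bad for sequential-insert hotspots), while hashed spreads the write load but makes range scans touch every shard.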

You can learn more by watching the MongoDB webinar on sharding here.

Data Security

MongoDB has a number of security features that can be taken advantage of to keep data safe, which is becoming more and more important with the ever-increasing amount of personal information being kept and stored. The main features utilized are:

  • Authentication. MongoDB integrates with all the main external authentication methods: LDAP, Active Directory, Kerberos, and x.509 certificates. You can take this a step further and implement IP whitelisting as well.
  • RBAC. Role-Based Access Control allows granular permissions to be assigned, either to a user or to an application. Developers can also create specific views to show only the data that is needed.
  • Auditing. This needs to be configured, but auditing is offered and can be output to the console or to a JSON or BSON file. The operations that can be audited are schema changes, replica/sharded cluster events, authentication and authorization, and CRUD operations (create, read, update, delete).
  • Encryption.
    MongoDB supports both at-rest and transport encryption. Transport encryption is handled via TLS/SSL certificates, which must use a minimum 128-bit key length. As of 4.0, TLS 1.0 is disabled if 1.1 is available. Either self-signed or CA-signed certificates can be used, and identity verification is supported for both clients and server node members. FIPS is supported, but only at the Enterprise level. Encryption at rest is handled by the encrypted storage engine introduced in 3.2; the default cipher is AES256-CBC.
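As a rough illustration, the transport and at-rest encryption options above end up in the mongod configuration file. This is a hypothetical fragment, not from any real deployment – the key names below are from my reading of MongoDB 4.2+ docs (4.0 and earlier used net.ssl instead of net.tls), so check them against your version before use:

```yaml
# Hypothetical mongod.conf fragment (verify option names for your version)
net:
  tls:
    mode: requireTLS                    # force TLS on all connections
    certificateKeyFile: /etc/ssl/mongodb.pem
security:
  authorization: enabled                # turn on RBAC
  enableEncryption: true                # at-rest encryption (Enterprise only)
  encryptionCipherMode: AES256-CBC      # the default cipher
```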

Next up we will go over a bit of what the hardware should look like.

Intro to MongoDB (part-1)

I don’t like feeling dumb. I know this is a weird way to start a blog post. I detest feeling out of my element and inadequate. As the tech world continues to inexorably advance – exponentially, even – the likelihood that I will keep running into those feelings becomes greater and greater. To try to combat this, I have a number of projects in the works to learn new products. Since there is a title on this blog post and I have shortsightedly titled it with the tech that I will be attempting to learn, it would be rather anticlimactic to announce what it is now. Jumping in….

What is MongoDB?

The first question is what is MongoDB, and what makes it different from other database programs out there, say MS SQL or MySQL or PostgreSQL? To answer that question, I will need to describe a bit of the architecture. (And yes, I am learning this all as I go along. I had a fuzzy idea of what databases were and have used them myself for many things. But if you had asked me the difference between a relational and a non-relational DB, I would have had to go to Google to answer you.) The two main types of databases out there are relational and non-relational. Trying to figure out the simple difference between them was confusing. The simplest way of defining it is the relational model definition from Wikipedia: “…all data is represented in terms of tuples, grouped into relations.” The issue with this was that it didn’t describe it well enough for me to understand, so I kept looking for a simpler definition. The one I found and liked is the following (found on Stack Overflow): “I know exactly how many values (attributes) each row (tuple) in my table (relation) has and now I want to exploit that fact accordingly, thoroughly, and to its extreme.” There are other differences – such as how relational databases are more difficult to scale and work with on large datasets. You also need to define all the data you will need before you create a relational DB; unstructured data or unknowns are difficult to plan for, and you may not be able to.

So, what is MongoDB then? Going back to our good friend Wikipedia, it is a cross-platform document-oriented database program. It is also defined as a NoSQL database. There are a number of different types of NoSQL databases though (this is where I really start feeling out of my element and dumb). There are:

  1. Document databases – These pair each key with a complex data structure known as a document. Documents can contain many different key-value pairs, key-array pairs, or even nested documents.
  2. Graph stores – These are used to store information about networks of data, such as social connections.
  3. Key-value stores – These are the simplest NoSQL databases. Every single item in the database is stored as an attribute name (or key) together with its value.
  4. Wide column stores – These are optimized for queries over large datasets and store columns of data together instead of rows. One example of this type is Cassandra.
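Since MongoDB is a document database, its records are essentially what the first item describes: nested key-value structures where each document can carry different fields. A quick Python sketch of the idea, using plain dicts to stand in for BSON documents (the field names and values are made up):

```python
# Two "documents" in the same collection. Note the second one carries a
# nested "shipping" sub-document the first doesn't have -- adding it
# required no schema change at all.
orders = [
    {"_id": 1, "customer": "acme", "total": 120.00},
    {"_id": 2, "customer": "globex", "total": 80.00,
     "shipping": {"carrier": "ups", "tracking": "1Z999"}},
]

# A simple "query": find the orders that have a shipping sub-document.
shipped = [o for o in orders if "shipping" in o]
print(shipped[0]["shipping"]["carrier"])  # ups
```

This is also the dynamic-schema point in miniature: the collection happily holds both shapes side by side.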

Why NoSQL?

In our speed-crazed society, we value performance. Sometimes too much. But still, performance. SQL databases were not built to scale easily or to handle the amount of data that some orgs need. To this end, NoSQL databases were built to provide superior performance and the ability to scale easily. Things like auto-sharding (distribution of data between nodes), replication without third-party software or add-ons, and easy scale-out all add up to high-performing databases.

NoSQL databases can also be built without a predefined schema. If you need to add a different type of data to a record, you don’t have to recreate the whole DB schema, you can just add that data. Dynamic schemas make for faster development and less database downtime.

Why MongoDB?

Data Consistency Guarantees – Distributed systems occasionally have the bad rap of eventual data consistency. With MongoDB, this is tunable, down to individual queries within an app. Whether something needs to be near-instantaneous or has a more relaxed need for consistency, MongoDB can do it. You can even configure replica sets (more about those in a bit) so that you read from secondary replicas instead of the primary, for reduced network latency.

Support for multi-document ACID transactions as of 4.0 – So I had no idea what this meant at all. I had to look it up. What it means is that before 4.0, you could not make changes to two different documents as a single atomic operation. NOW you can do both at the same time. Think of a shopping cart: you want to remove the item from your inventory at the same moment the customer buys it, and you want those two writes to succeed or fail together. BOOM! Multi-document transaction support.

Flexibility – As mentioned above, MongoDB documents are polymorphic, meaning they can contain different data from other documents with no ill effects. There is also no need to declare anything, as each document is self-describing. However……. there is such a thing as schema governance. If your documents MUST have certain fields in them, schema governance will step in and impose structure to make sure that data is there.

Speed – Taking what I talked about above a bit further, there are a number of reasons why MongoDB is much faster. Since a single document is the place for reads and writes for an entity, pulling data usually requires only a single read operation. The query language is also much simpler, further enhancing speed. Going even further, you can build “change streams” that trigger actions based on configurable stimuli.

Availability – This will be a little longer, since there is a bit more meat on this one. MongoDB maintains multiple copies of data using a technology called replica sets. They are self-healing and will auto-recover and fail over as needed. The replicas can be placed in different regions, as mentioned previously, so that reads can come from a local source, increasing speed.

In order to maintain data consistency, one of the members assumes the role of primary. All others act as secondaries and replay the operations from the primary’s oplog. If for some reason the primary goes down, one of the secondaries is elected primary. How does it decide, you may ask? I’m glad you asked! It decides based on who has the latest data (determined by a number of parameters), who has the most connectivity with the majority of other replicas, and, optionally, user-defined priorities. This all happens quickly and is decided in seconds. When the election is done, all the other replicas start replicating from the new primary. If the old primary comes back online, it will automatically discover its role has been taken and will become a secondary. Up to 50 members can be configured per replica set.
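The election logic described above can be caricatured in a few lines of Python. This is deliberately a toy sketch – a real replica-set election also involves majority voting, election terms, and heartbeats – but it captures the “freshest data wins, priorities can override” idea. The node names and oplog timestamps are invented:

```python
def elect_primary(members):
    """Toy election: among healthy members, pick the highest
    user-defined priority, breaking ties with the freshest oplog."""
    eligible = [m for m in members if m["healthy"]]
    return max(eligible, key=lambda m: (m["priority"], m["oplog_ts"]))

members = [
    {"name": "node-a", "healthy": False, "priority": 1, "oplog_ts": 105},  # old primary, down
    {"name": "node-b", "healthy": True,  "priority": 1, "oplog_ts": 104},
    {"name": "node-c", "healthy": True,  "priority": 1, "oplog_ts": 101},
]

# node-a is excluded because it is down; node-b beats node-c on data freshness.
print(elect_primary(members)["name"])  # node-b
```

If node-a later comes back up, it would simply rejoin as an ordinary member and, per the description above, become a secondary of the new primary.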

Well, that’s enough for one blog post, as I’m at about 1,200 words already. The next post will continue with sharding and more MongoDB goodness.

Windows Bare Metal Recovery on Rubrik’s Andes 5.0

Rubrik Andes 5.0 is out! There are so many features that have been added and improved upon. One of the many things that has me excited is Bare Metal Recovery. While virtualization has pretty much taken over the enterprise world, there are still reasons to have physical machines. Whether a workload is unable to be virtualized or its presence on a physical machine is a business requirement, there still exists a need for physical protection. So I wanted to do a quick walkthrough to share how Rubrik has improved its ability to perform bare metal recovery (and to go through it myself!). I won’t go into how to create SLA policies, or what they mean, here, since there are plenty of good resources out there. Let’s get started!

In order to create the PE boot disk, you will need to download the ADK from Microsoft, which provides the PE environment. What is needed is a boot CD that gives you a PowerShell prompt and network access. You can download the Microsoft ADK from Microsoft’s download page. What you are downloading is the setup file, which will then download everything else; all the files downloaded will probably be around 7GB. I believe you should also be able to use a Windows Server install CD and go into recovery mode, but I have not tested this yet.

Once you download the ADK and install that, you will need to download Rubrik’s PE install tool. This is a PowerShell script that will aggregate the files needed and compile them into an ISO that can then be used on a USB key etc.

Download the PE install from Rubrik’s Support site:

(If you don’t already have one, you will need a username and password to log in)

Once downloaded, you need to unzip the files directly to your C: drive.

Those two folders will be created.

Now open an Admin PowerShell prompt and run the following – you may need to adjust the beginning to account for where the .ps1 file is.

.\CreateWinPEImage.ps1 -version 10 -isopath C:\WinPEISO -utilitiespath C:\BMR

It will then begin running the ISO creation process.

At the end of this, a bootable ISO will be created in the location shown in the figure (the path you specified in the PowerShell command above, C:\WinPEISO).

You will boot from this ISO later on to restore the physical machine.

Normally the next parts would already be done, since you are interested in the restore. I put this in though, since I was creating everything as I was writing this post and someone out there might not be aware of how to protect the machine in preparation for BMR.

After the machine has been installed with Windows and any other software needed, you should install the Rubrik Connector on the machine and add it to the Rubrik cluster. To do that, log into the Rubrik cluster GUI, click on Windows Hosts, and then Add Windows Hosts, as shown here in the picture.

You are then presented with a popup window where you can download the Rubrik Connector and install it. After it is installed, click “Add”

The next step is to install the Volume Filter Driver. You can do that by clicking the checkbox in front of the machine and then clicking the ellipsis in the top right corner. To be clear, you don’t “need” the Volume Filter Driver for this to work, but it does a better job of keeping track of changes and should allow you to take faster incrementals. It performs the same job as RCT in Hyper-V or CBT in VMware.

Click on the checkbox in front of the machine. Then click on the ellipsis, and click on “Install VFD”

The host will need to be restarted, as you can see on the right side of the picture. It is really easy to tell whether the VFD is installed from the column on the right (it may take a few minutes for Rubrik to register that you rebooted). Once the host has restarted, you will need to take a snapshot of the machine. You can do this by clicking on the Windows host name itself and then clicking “Take On Demand Snapshot”.

This will launch a short wizard to take the snapshot

I’m going to click on Volumes and then the ‘C’ drive. The next screen lets you add the machine to an SLA protection policy. You don’t have to, but you should – this will keep the machine continually protected according to the SLA you choose. Click “Finish” and watch the magic occur.

So in case you were wondering….my first error above was because the machine was not part of a domain. Some of the permissions required, need the machine to be part of a domain.

Once the backup has been completed, you will see a dot on the calendar telling you that a snapshot exists on that day.

In order to restore this, click on that dot. You will then see the snapshots available, shown. At the end of the line you have an ellipsis you can click on to take action on it.

Now you CAN choose individual files with the “Recover Files” option, but that won’t help you perform a bare metal restore. The option you are looking for is “Mount”. When you choose “Mount”, you will get a new popup showing the drives available. Click on the C: drive and any others you need, then click “Next”. The next window gives you more options: you can either mount the snapshot on a host (if you are just restoring a volume) or, since we are doing a bare metal restore, click “No Host” at the bottom to expose an SMB share with the VHDX file.

In order to preserve security around the share, you will need to specify who is allowed to access it by entering an IP address. ONLY the IP addresses you input will be able to access the share. You probably don’t know what IP address to put in here yet, so start your physical machine up with the PE CD and use ipconfig in the command line window to find the IP to use.

After the drive is mounted in Rubrik, you can find it by going to the Live Mount menu on the side and selecting Windows Volumes. When you hover over the name, it will give you the option of copying the SMB share path. Just move your mouse down to the Path and click on it to copy it to your clipboard.

The bottom image is what the share will show in case you’re curious.

Since your machine has already been started with the PE ISO, the next step is to run the PowerShell command to begin the restoration process. The PowerShell command is shown to you in the window shown above, but here is an example of what you might use:

powershell -executionpolicy bypass -file \\\xm5fuc\without_layout\RubrikBMR.ps1

The part in blue is unique to that mount and needs to be changed. Once you hit Enter, a flurry of activity will happen and you will see it copying the data over. Grab a coffee or a nice single malt scotch – either will do depending on the time of day. I have a video below of what happened after I hit Enter. It restored 22GB of files to the C:\ drive in about 15 minutes, over a 1Gb Ethernet connection and using a single virtual node. In closing, I love this feature and appreciate the additional refinements made in this version.

Don’t Backup. Go Forward.

3 Months in…..

Slightly over 3 months in now at my first role as Technical Marketing Engineer with Rubrik, Inc. and I couldn’t be happier. The job itself brings new things often enough that I don’t feel bored. And my team is amazing – I couldn’t ask for a more supportive group of people. The more I work with them, the better it gets. The breadth of knowledge and insight they bring to the table helps me immensely. As I’m sitting here at my computer on a Friday night feeling thankful, I thought I would do a quick recap of some of the projects I’ve already been working on and things I’ve done.

First month:

Started off with bootcamp and began learning about the product. Like past roles with most of the companies I’ve worked for, this was once again like drinking from a firehose. To be clear, I do not feel like I have even scratched the surface of the products, even after 3 months. Along with trying to ramp up on the product and forgetting countless names of co-workers (yeah, I’ve got an exceedingly bad memory for names and I definitely apologize to all those whose name I’ve forgotten… ever), I also attended VMworld. While there, I caught up with countless friends and started experiencing the high regard that people hold for Rubrik. I was able to meet customers who also loved the product, and some even went as far as sharing/paying for cab rides with me and of course dinners. I was able to help set up the Rubrik demo labs and felt like I was starting to contribute to the team. I also did my first conference presentation ever, in conjunction with Pure Storage, at VMworld.

Second month:

Second month, I started warming up with presentations, with a product demo and a webinar. Both helped calm some of my jitters about presenting in front of people. I’ve always been nervous about presenting in front of people, with a side of imposter syndrome. But I also hate that feeling, and it was a major reason why I wanted to be (and was) a VMware certified instructor while working at Dell EMC, and a large reason for moving to this role. I’ve always been decently introverted and have worked hard to try to come out of my shell. The community has been a large part of what has made it easier for me to do so. During the month, I ended up back at HQ for team meetings and more team-building activities. This is one of the first teams that I’ve worked for that has truly worked hard at bringing their employees together and becoming family. To end the month out, I started preparing for my first VMUG presentation.

Third month:

The third month I traveled to Phoenix, AZ for the UserCon there. I gave my first presentation to a… not packed house. It was actually better this way in my opinion, since it allowed me to ease into this sort of thing. I felt more like it was a conversation, and I tried to get the attendees involved in the presentation with me instead of it just being a slideshow. The last part of the month was finished off by going back to HQ to work in the lab. I’ve always loved lab work, as it presents a clear problem or goal that you can concentrate on, instead of needing to define things first. I admit freely that I’ve confined myself to that sort of environment for too long though, and need to work on my creative side. Which is why we are going to try to blog more.

So what’s next on the agenda? First thing, I have a vacation planned. First one in 5 years where I am actually going somewhere. Heading to a remote cabin in Colorado to spend a week. Get some creative juices flowing again and some rest. I will be visiting a few friends from VMware up there and enjoying a few libations with them. Hopefully the ideas for some blog posts will show up and I’ll begin writing those. After that, I’m still doing a ton of learning. Trying to get a lot better at my Spanish and Rubrik’s products along with the products we support (Azure, Hyper-V, AWS, etc.). I’m sure there will be more information coming out of those. To be continued….

New Beginnings

It was with a bit of regret and a small bit of fear that I turned in my 2 weeks’ notice last week. Even though I technically left Dell 2.5 yrs. ago, Dell wasn’t done with me yet and decided to buy the company I moved to. So essentially, I worked for Dell in some capacity for the last 6 yrs. During that time, I did a bit of everything from front-line phone tech to VMware Certified Instructor. I learned a ton of IT that you never really see until you work in larger environments and made some great life-long friends. I really enjoyed teaching and the feeling I may have helped my students along in their career, and because of that, I decided to get more into the education side of IT. To do this, I moved over to EMC to be a Content Developer for the Enterprise Hybrid Cloud solution (1 month after I joined EMC, Dell announced the buyout and I once again became a Dell employee). I helped develop classes for that for a while before going down the path of Lab Architect.

Shortly after I started the Lab Architect role, I was approached with a possibility of blending all the things I love in a single position and with the sweet addition of getting paid for it as well. The training, talking with customers, building POCs, and blogging. I love the idea of trying to help people with the work that I do, and as I get older (ugh) I personally feel that I need to make more a difference with trying to help people. I believe this position will allow me to do that. I greatly appreciate all the help that everyone has given me up to now and continuing. The VMware community is one of the best communities I’ve ever been a part of and God-willing will continue to be part of for a long while.

Putting this in a separate paragraph for the TL;DR crowd: I have accepted a new role as Technical Marketing Engineer for Rubrik. My last day with Dell/EMC is 7/27. I am looking forward to working with a team of people who I greatly admire and respect. I have a ton of catch-up and work to do in the coming months and pray they have the patience for me. I am extremely excited not only about the people that I get to work with but also the product. Rubrik has some really cool technology which I plan on delving way deeper into, and they seem to have an awesome vision for handling data in a way that makes it really easy to manage and control. I look forward to what’s coming….

Tales of a Small Business Server restore……

I know that many of you have gone through your own harrowing tales of trying to bring environments back online. I always enjoy hearing these experiences. Why? Because these are where learning takes place. Problems surface and solutions have to be found. While my tale doesn’t involve a tremendous amount of learning per se, I feel there are a few things I discovered along the way that may be useful for someone who has to deal with this later. So let’s begin the timeline.


The current server is a Microsoft Small Business Server 2011. This server serves primarily as a DNS/file/Exchange server. It houses about 300–400GB of Exchange data and about 700GB of user data. This machine is normally backed up using a backup product called Replibit, which uses an onsite appliance to house the data and stage it for replication to the cloud. So theoretically you have a local backup snapshot and a remote-site backup. As backups always somehow have challenges associated with them, this seems like an appropriate amount of caution. The server itself is a Dell and is more than robust enough to handle the small business’ needs. There are other issues I would be remiss not to mention, like the majority of the network being on a 10/100 switch, with the single gigabit uplink used by the SBS server.

Sometime in the wee hours of the morning…….

This was when the server was laid low. I don’t know exactly what caused it, as I haven’t performed a root cause analysis yet, and it’s unlikely to happen now. Going forward, I will be recommending a new direction for the customer, as I believe there are better options out there now (Office 365, standard Windows Server).

I believe there was some sort of patch that may or may not have happened around the time the machine went down. Regardless, the server went down and did not come back up. It would not even boot in Safe Mode; it would just continually reboot as soon as Windows began to load. Alerts went off notifying us of the outage, and the immediate action taken was to promote the latest snapshot to a VM on the backup appliance. This is one of the nice features Replibit offers. The appliance itself runs a customized Lubuntu distro, and virtualization duties are handled by KVM. The VM was started with no difficulty, and with a few tweaks to Exchange (for some reason it didn’t maintain DNS forwarding options), everything was up and running.

After 20 minutes of unsuccessfully trying to get the Dell server to start in Safe Mode, Last Known Good Configuration, or any other mode I could, I decided my energies would be better spent just working on the restore. Since the users were working fine and happy on the VM, the decision was made to push the restore to Saturday to minimize downtime and disruption.

Saturday 8:00am…….

As much as I hate to get up early on a Saturday and do anything besides drink coffee, I got up and drove to the company’s office. An announcement had been made the day before that everyone should be out of email, off the network, etc. Then we proceeded to shut down the VM. Using the recovery USB, I booted into the recovery console and attempted to start a restore of the snapshot that the VM was using to run. I was promptly told “No” by the recovery window. Reason? The iSCSI target could not be created. This being the first time I had used Replibit personally, I discovered how it works: the appliance creates an iSCSI target out of the snapshot, then uses that to stream the data back to the server being recovered. Apparently, when we promoted the snapshot to a live VM, it created a delta disk with the changes from Wednesday to Saturday morning. The VM had helpfully found some bad blocks on the 6-month-old 2TB Micron SSD in the backup appliance, which corrupted the snapshot delta disk. This was not what I wanted to see.

With the help of Replibit support, we attempted everything we could to start the iSCSI target. We had no luck. We then tried creating an iSCSI target from the previous snapshot. This worked. It was a problem, however, because we would lose 3.5 days of email and work. Through some black magic and a couple of small animal sacrifices, we mounted the D drive of the corrupted snapshot with the rest of the week’s data (somehow it was able to differentiate the drives inside the snapshot). I was afraid, though, that timestamps would end up screwing us with the DBs on the server. Due to the lack of any other options, we decided to press forward. The revised plan was now to restore the C drive backup from Tuesday night and then try to copy the data from the snapshot’s D drive using WinSCP. We started the restore – it was about 11am-ish on Saturday. We were restoring only 128GB of data, so we didn’t believe it would take that long. The restore was cooking at first, 200–350MB/min. But as time wore on… the timer kept adding hours to the estimate and the transfer rate kept dropping. Let’s fast forward.

Sunday 9:20pm

Yes… 30+ hours later for 130GB of data, and we were done with just the C drive. At this point, we were sweating bullets. The company was hoping to open as usual Monday morning, and with those sorts of restore times, it wasn’t going to happen. —Would like to send a special shout-out to remote access card manufacturers, Dell’s iDRAC in this case, without which I would have been forced to stay onsite during this time, and that wouldn’t have been fun—Back to the fun. The first thing now was to see if the restore worked and the server would come up. I was going to bring it up in Safe Mode with Networking, as the main Exchange DB was on the D drive and I didn’t want Exchange to try to come up without it – or any other services that also required files on the D drive, for that matter.

The server started and F8 was pressed. “Safe Mode with Networking” was selected and fingers were crossed. The startup files scrolled all the way down through Classpnp.sys and it paused. The hard drives lit up and pulsed like a Christmas tree. Five minutes later the screen flashed and “Configuring Memory” showed back up on the screen. “Fudge!” – this is what happened before the restore, just slower. I rebooted, came back to the selection screen, and this time just chose “Safe Mode”. For whatever reason, the gods were smiling on us and the machine came up. The first window up by my hand was a command prompt with an SFC /scannow command run. That finished with no corrupt files found (of course), so I moved on. I then re-created the D drive, as the partition table had been overwritten when the C drive was restored. I had no access to the network, of course, and needed that to continue with the restoration process. I rebooted again and chose “…with Networking” again. This time it came up.

Now we moved on to the file copy. The D drive was mounted on the backup appliance in the /tmp folder (just mounted, mind you, not moved there). We connected with WinSCP, chose a couple of folders, and started the copy. Those folders moved fine, so on to some larger ones… Annnnnd an error message. OK, what was the error? File name was too long. Between the path name and the file name, we had files that exceeded 255 characters. This was basically a 2008 R2 Windows server, so there was no real help for files that exceeded that. While the NTFS file system itself can accept a filename including path of over 32k characters, the Windows shell API can’t. Well, crap. This was not going the way I wanted it to. Begin thought process here. Hmmm, Windows says it has a hotfix that can let me work around this… That doesn’t help me with the files it pseudo-moved already, though. I can’t move/delete/rename or do any useful thing to those files, whether in the shell or in Explorer. (I did discover later that I can delete files locally with filenames past 255 characters if I use WinSCP to do so. This does create a lock on the folder, though, so you will need to reboot before you can delete everything.) I can’t run the hotfix in Safe Mode, but I don’t really want to start Windows up in normal mode. I don’t have much choice at this point, so I move the rest of the Exchange DB files over to the D drive. This will allow me to start in regular mode without worrying about Exchange. I then go home to let the server finish the copy of about 350ish GB. A text is sent out informing the company that the server is not done and giving the status of our work.

Monday morning 8am

The server is rebooted and it comes up in regular mode – BIG SIGH OF RELIEF – the hotfix files are retrieved and I try to run them. Every one, even though 2008 R2 is specifically called out, informs me that it will not work on my operating system. Well, this is turning back into a curse-inducing moment… again. Through a friend, I learn of a possible registry entry that might let us work with long file names – this doesn’t work either. Through my frantic culling of websites in my search for a solution, I find out there are two programs that do not use the Windows shell API and so are not hampered by that pesky MAX_PATH limit. (I did find there is a SUBST command I could use at the CLI to try to shorten paths manually. This is not feasible, though, as one user has over 50k files that would need to be renamed.) Those programs are RoboCopy and FastCopy. FastCopy looks a little dated, I know, but as I found out, it worked really well. On to the next hurdle! These tools require a Windows SMB share to work, so we need to mount a Samba share on the backup appliance and reference the mounted snapshot so we can get to it. This works, and a copy is set up to test. 5 minutes in… 10 minutes in… Seems like it’s working. FastCopy is averaging a little better than 1GB/min transfer speeds as well. I set it up for multiple folders and decide to leave it in peace and go to bed (it is 12am at this point).
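For the curious, the reason tools like RoboCopy and FastCopy can sidestep the limit is that the underlying Win32 file APIs will accept paths well past MAX_PATH (260 characters) if you hand them the extended-length `\\?\` prefix, which the Explorer/shell layer never does. Here is a small Python sketch of just the prefixing rule (the paths are made-up examples, and real code would also need to normalize the path first):

```python
def extended_length(path: str) -> str:
    r"""Prefix an absolute Windows path with \\?\ so Win32 APIs accept
    paths beyond MAX_PATH; UNC paths use the \\?\UNC\ form instead."""
    if path.startswith("\\\\"):          # UNC share: \\server\share\...
        return "\\\\?\\UNC\\" + path[2:]
    return "\\\\?\\" + path

print(extended_length(r"C:\Users\bob\a\very\deep\tree\file.txt"))
# \\?\C:\Users\bob\a\very\deep\tree\file.txt
print(extended_length(r"\\backup01\d$\users\bob\file.txt"))
# \\?\UNC\backup01\d$\users\bob\file.txt
```

This is also why the WinSCP delete trick worked: anything that talks to the filesystem below the shell API isn’t bound by the 255-ish character ceiling.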

Tuesday morning

All files were moved over by this time. Some of them didn’t pull NTFS permissions with them for some odd reason, but no big deal, I’ll just re-create those manually. Exchange needs to be started. Eseutil to the rescue! The DBs were shut down in a dirty state, and the logs were also located on the C drive. We were able to find the missing logs, though, merge everything back together, and get the DBs mounted. At this point, there are just a few “mop-up” things to do. There was one user who lost about 4 days of email, since she was on a lone DB by herself and it was hosted on the C drive. She wasn’t happy, but there was not much we could do about a hardware corruption issue, unfortunately.

Lessons learned from this are as follows (this list is not all-inclusive). Test the backup solution you are using before you need it. Some things are unfortunately beyond your control, though – corruption on the backup device’s hardware is one of those things that just seems like bad luck – so you should always have a Restore Plan B, C, and so on. Along with this, realistic RPOs and RTOs should be shared with the customer to keep everyone calm. Invest in good whiskey. And the MAX_PATH limit sucks, but it can be gotten around with the programs mentioned above. Happy IT’ing to everyone!

Creating a 2 Tier App for testing

It has been a remarkably long time since my last post, and I apologize for that. Things got in the way… such as my own laziness, my job, and more laziness. You get the idea.

This blog post was conceived because of the lack of posts out there on this topic. Granted, I may just be dense, but it took me a while (and some help) to get this working, and I used a previous blog post from another author as a template. That post is here, but it had a number of problems and omissions that caused me issues. So, I took it upon myself to correct those small things and repost it. In full disclosure, I did try to reach out to the blog’s author, but have not heard back from him yet.

To start out, a bit about my environment. I created a couple of VMs in my home lab running vSphere 6.5. I don’t have anything fancy in it right now, especially since NSX doesn’t run on 6.5 currently. I started the VMs out with what vSphere automatically provisioned: 1 vCPU, 2GB of RAM, and a 16GB HD. This can be reduced of course, since I am just using the CentOS 6.8 minimal install CD and don’t believe these machines will need to handle much traffic. I ran through the graphical setup and set up a hostname and IP address on each of the machines. The goal, of course, is to eventually have these machines on separate network tiers to test out all the features available to us in NSX, such as micro-segmentation (once NSX is supported on vSphere 6.5, of course).

I am using CentOS 6.8 (the latest 6.x release as of this writing), mainly because I am more familiar with 6.x than 7. Also, Linux is free, easy to deploy, and light on resources, making it a perfect OS to use here. The first thing we need to do is disable the firewall. This IS a lab environment, so I am not too worried about hackers etc., and I will be adding NSX firewalls on these machines later. To accomplish this, type the following:

service iptables save

service iptables stop

chkconfig iptables off
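If you want to double-check that the firewall is stopped and won’t return at boot, these standard CentOS 6 commands will confirm it (shown here as a sanity check, not a required step):

```shell
# Should report that iptables is not running
service iptables status
# Every runlevel listed should show "off"
chkconfig --list iptables
```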

You will do this on both machines. We will concentrate on the database server first. This is only going to be a 2 tier app: we will have a database server and a Web/PHP/WordPress server. You can add more however you want to, but this is a good start. Perhaps for a third tier you could add a proxy like the blog post above does. Personally, I was just going to use a client machine to access the first two. But it is all up to you – it’s your world, and if you want a happy little tree in there, you put one in there. :)

Database Server Config

We are going to use MySQL like the original blog.

yum install -y mysql mysql-server mysql-devel

The above line will install all the pieces of MySQL that we will need. We now need to start the service, set it to run at startup, and go through the short setup of creating an admin password and deciding whether we want to keep the default test database, allow anonymous users, and permit remote root login.

service mysqld start

chkconfig mysqld on

/usr/bin/mysql_secure_installation


One more note: it is much easier to copy and paste these commands than to retype them, and I would recommend using PuTTY for that. We are now going to create our database and set permissions for it.

mysql -u root -p

SELECT User, Host, Password FROM mysql.user;


CREATE DATABASE wordpress;

CREATE USER wp_svc@localhost;

CREATE USER wp_svc@'%';

SET PASSWORD FOR wp_svc@localhost=PASSWORD("Password123");

GRANT ALL PRIVILEGES ON wordpress.* TO wp_svc@localhost IDENTIFIED BY 'Password123';

GRANT ALL PRIVILEGES ON wordpress.* TO 'wp_svc'@'%' IDENTIFIED BY 'Password123';
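Before leaving the database server, it is worth sanity-checking what we just created (a quick verification, not part of the original walkthrough):

```sql
-- Confirm the service account exists for both local and remote hosts
SELECT User, Host FROM mysql.user WHERE User = 'wp_svc';

-- Confirm it has rights on the wordpress database
SHOW GRANTS FOR 'wp_svc'@'%';
```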



You can change the above to whatever parameters you wish; just write them down, as you will need them later. I also bound MySQL to the server’s IP address; you can do that in /etc/my.cnf if you wish. The code is below.
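A minimal sketch of the relevant /etc/my.cnf section follows (the address shown is a placeholder – substitute your DB server’s actual IP):

```ini
# /etc/my.cnf -- bind MySQL to the server's own address
# 192.168.110.10 is a hypothetical example IP
[mysqld]
bind-address=192.168.110.10
```

Restart the service (service mysqld restart) after editing the file so the change takes effect.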


Obviously, you would change the IP address to the one you are using. And that’s it for the DB.

Webserver Config

The first thing we need to do on this machine is disable the firewall again. We also need to rein in SELINUX since, if we don’t, our packets will never leave this machine (something I struggled with and finally figured out with the help of my good friend Roger H.). Shameless plug for him at his blog here – I highly recommend you check him out, as he is a brain when it comes to Linux things. So here is the code we need:

service iptables save

service iptables stop

chkconfig iptables off

In order to stop SELINUX from making our life horrible, we are going to set it to Permissive mode. In Permissive mode it will still scream at us in the logs, but it won’t actually block anything. Using your favorite text editor, edit the /etc/sysconfig/selinux file and change the SELINUX= line to SELINUX=permissive (to apply it immediately without a reboot, you can also run setenforce 0). The file will look like this:

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

# enforcing - SELinux security policy is enforced.

# permissive - SELinux prints warnings instead of enforcing.

# disabled - SELinux is fully disabled.

SELINUX=permissive

# SELINUXTYPE= type of policy in use. Possible values are:

# targeted - Only targeted network daemons are protected.

# strict - Full SELinux protection.

SELINUXTYPE=targeted

# SETLOCALDEFS= Check local definition changes


Next we are going to install a ton of stuff.

yum install -y httpd

chkconfig --levels 235 httpd on

service httpd start

The above installs the Apache web server and sets it to run at machine start up. Next we need to install PHP, as this is what WordPress requires to run. We will also install the supporting modules.

yum install -y php php-mysql

yum -y install php-gd php-imap php-ldap php-odbc php-pear php-xml php-xmlrpc php-mbstring php-snmp php-soap php-tidy curl curl-devel wget

Next we will download the latest version of WordPress (4.7 as of this scribbling), extract it, and copy it over to the webserver’s www home directory. Then we will add the config to point back to the DB server.


wget https://wordpress.org/latest.tar.gz

tar -xzvf latest.tar.gz

cp -r wordpress/* /var/www/html

cd /var/www/html

cp wp-config-sample.php wp-config.php

Again, using your favorite text editor open the wp-config.php file and change it like below. If you chose different values for your database name and username/password you will need to use that info now.

// ** MySQL settings - You can get this info from your web host ** //

/** The name of the database for WordPress */

define('DB_NAME', 'wordpress');

/** MySQL database username */

define('DB_USER', 'wp_svc');

/** MySQL database password */

define('DB_PASSWORD', 'Password123');

/** MySQL hostname */

define('DB_HOST', '');

/** Database Charset to use in creating database tables. */

define('DB_CHARSET', 'utf8');

/** The Database Collate type. Don't change this if in doubt. */

define('DB_COLLATE', '');

Once this is done, you can browse to your website to finish the WordPress install; you can use either the FQDN or the IP address of the web server.


When done, your site will be up and ready – CONGRATS!