vCenter Accidental lockout! Read-only pitfalls

Recently while working with a customer, we had a single host that was completely read-only for their domain login.

The symptoms: buttons and controls they normally had access to (everything related to editing) were grayed out.

(Most vCenter screenshots will be from the web client from here on out because, like it or not, it is soon going to be the only choice.)


Since all permissions were supposed to be set at the top level of vCenter, and were assigned to a group, this was puzzling. A quick look at that host's Permissions tab, and we found our culprit.


After some googling, I found the explanation here.

The core of the issue is that a Read-Only permission set on an object overrides an Administrator permission inherited from above. If you set Read-Only at the top level, you can even lock all administrators out of vCenter in one go (depending on how you set up permissions to begin with).

The article is accurate in how to correct the situation, but since a lot of admins I run across are nervous around SQL, I thought a walk-through video might be helpful.

In essence, to fix the issue you just need to update a single table in the vCenter database, dbo.VPX_Access. However, being a good administrator, you are going to want to back up your database before editing it directly 🙂
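For reference, the table edit from the linked article boils down to a single UPDATE. Here is a minimal sketch (in Python, just to make the statement concrete). The table and column names come from the article; the assumption that ROLE_ID -1 maps to the built-in Administrator role is something you should verify against your own VPX_ROLE table before running anything.

```python
# Sketch only: builds the UPDATE described in the linked article. Verify that
# -1 really is the Administrator role in your VPX_ROLE table, back up the
# database, and stop the vCenter service before touching dbo.VPX_Access.

def restore_admin_sql(principal_name):
    """Return the UPDATE that swaps a locked-out principal back to Administrator."""
    return (
        "UPDATE dbo.VPX_Access "
        "SET ROLE_ID = -1 "  # -1 = built-in Administrator role (verify in VPX_ROLE)
        "WHERE PRINCIPAL_NAME = '{}';".format(principal_name)
    )

# Example principal name is made up -- use the exact name from your VPX_Access rows.
print(restore_admin_sql("DOMAIN\\Domain Admins"))
```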

Below is a short video walking through editing the table and restoring your access.

Where are the logs?!

I'm starting to work on some “IT Essentials” training videos for folks in our NOC. In this section we will focus on finding logs for various technologies. Logs are GOLD! Always find the logs!

In this installment we have Windows and SQL logs covered. Apologies as I learn the best way to record these kinds of videos. I can only promise they will get better 😉

Searching Windows Logs

Searching SQL Logs

Exchange 2010 SP1 Stretched DAG node

Just completed an interesting deployment and I thought I would capture a few key points of interest.

The scenario is extending an existing Exchange Database Availability Group to include a copy in an offsite datacenter, located across a 10 Mb WAN with 50 ms of latency from the local DAG HA pair. Note that this particular scenario does not include a DAG pair in the DR datacenter, just a single node, so we will not be configuring a separate witness file share for the DR side.

First off, this KB article from Microsoft goes straight to the heart of the configuration, so if you are looking to go in depth, just click and start reading.

After going through it, I think I can boil the process down to the following key points:

  • When adding new nodes to an existing DAG cluster, especially in another datacenter, server builds may vary. It is exceedingly important that the DAG members match each other in NIC count: they all have to have the same number of NICs visible to the OS.
  • If you are a straight Exchange admin who does not normally work with network routing, this process does require you to work a bit with NETSH if you have broken out your replication network from your MAPI network (and you should be doing that!!). Since a given server should only have one default gateway (on the MAPI network), your replication network won't have a default gateway. NETSH static routes establish this connectivity for your replication network.
    • Syntax is NETSH interface ipv4 add route <remote replication subnet> “<REPLICATION NIC NAME>” <local replication network gateway>
      • For example: netsh interface ipv4 add route 10.2.20.0/24 “DAG REPLICATION” 10.1.20.1 (addresses are illustrative)
    • This gets run on every DAG node so each one can reach the other replication subnets
  • Unless you are in an unusual situation and have a flat network covering both your primary and your DR site, you will be working with two different subnets for your MAPI network. For each MAPI NIC you have in a unique subnet, you must have a unique DAG virtual IP assigned in that subnet as well, and you must add that DAG IP to the DAG in the Exchange Management Console (pictured below)



  • Add your DAG node from the Exchange Management Console, not the Failover Cluster Manager.
  • Reboot the new DAG member after adding it to the DAG, and make sure to restart any existing Exchange Management Console sessions.
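To make the routing step above concrete, here is a small sketch (Python, purely illustrative) that emits the netsh command to run on each DAG node, one route per remote replication subnet. The subnets, NIC name, and gateway below are made-up placeholders, not values from this deployment:

```python
# Illustrative values only -- substitute your own replication subnets,
# replication NIC name, and local replication-network gateway.
LOCAL_REPL_GATEWAY = "10.1.20.1"
REPL_NIC_NAME = "DAG REPLICATION"
REMOTE_REPL_SUBNETS = ["10.2.20.0/24"]  # one entry per remote replication subnet

def netsh_route_commands():
    """One static route per remote replication subnet (run on every DAG node)."""
    return [
        'netsh interface ipv4 add route {} "{}" {}'.format(
            subnet, REPL_NIC_NAME, LOCAL_REPL_GATEWAY)
        for subnet in REMOTE_REPL_SUBNETS
    ]

for cmd in netsh_route_commands():
    print(cmd)
```

Each node gets routes to every replication subnet it doesn't sit in, using its own local replication gateway as the next hop.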

That’s it! Pretty easy process once you have gone through it once and very handy for replicating your Exchange data to a safe location. Just be sure your latency is not over 500ms and expect that large mail databases are going to take a while to seed. You have ZERO control over throttling the seed process from Exchange, so if you have to throttle it, you will need to involve your network team and throttle it from the firewall.


PCoIP and USB Mic misbehaving with “Follow-Me” desktop


I’ve been working on “Follow-me” desktop solutions lately, especially VMware View. Working through different workflows and use cases, I ran into a pretty vexing peripheral issue. As it stands, the PCoIP protocol has an issue with how it handles bi-directional audio through a USB microphone.

One of the new features of recent PCoIP releases is support for connecting isochronous USB devices. This type of device has a bandwidth guarantee associated with it and is essentially given privileged status. Things like microphones! You have to have a great connection, or your voice input comes out all garbled and scratchy when sent over the network (connecting your physical desktop to your virtual desktop). This actually works quite well, and I have to give PCoIP props for how well it performs. A good guide to the basics of getting things rolling is found here.

However, one particular use case makes apparent a big problem right now with PCoIP and USB microphones: the “Follow-me” desktop. Normal behavior for USB redirection goes along these lines:

  1. Disconnect device from host
  2. Reconnect device to target virtual desktop

This is the virtual USB hub included with the View client making this happen. When you disconnect your physical computer from your View desktop, the following steps are supposed to happen:

  1. Disconnect device from target virtual desktop
  2. Reconnect device to host

This is all because only one system can “own” a USB device at a time. However, that last step is not happening for USB microphones when connecting via PCoIP. Oddly enough, the process works correctly for RDP connections, but RDP will not work correctly with some commonly used USB mic apps, like Dragon NaturallySpeaking.

What you will observe follows this pattern….

  1. First connection from a “fresh” host PC to a Virtual desktop ends with the microphone working.
  2. The user logs off of their View desktop
  3. Any user that then tries to log on from that desktop will be unable to connect to the microphone. You will see an error similar to “Cannot connect <device name>. It may be in use by another application.”
  4. If you unplug the USB microphone and plug it back in to the physical PC, you will see the USB composite device in a disabled state in device manager.

There is a VMware KB regarding this issue here that describes in great detail what error messages you might see, and describes their current official workaround.

The core of the solution, if you can call it that right now, is to remove the disabled USB composite device corresponding to your mic from Device Manager, while the mic is plugged in, after it has “failed” and will no longer connect to View desktops. This does reset the device so that it can successfully work with another virtual desktop again, but it is far too involved for an average user to do on a near-constant basis.

If you want to script this to take a lot of the pain out, Microsoft actually has a nifty command line utility that lets you manipulate the device manager. DevCon is the name, and it can be a lot of fun….but can have….ahh…unfortunate side effects if you aren’t careful.

Regardless, if used correctly, you can fire off DevCon to remove the specific VID/PID of the USB mic from the host PC (specifically the USB composite device). However, the end user would still need to physically unplug the mic and plug it back in before the device would be fully reset and ready to connect to another virtual session.
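As a sketch of what that could look like (Python wrapper around DevCon; the VID/PID below is hypothetical, so pull your mic's real hardware IDs from Device Manager, and this assumes devcon.exe is on the PATH):

```python
# Hypothetical VID/PID -- look up your microphone's actual hardware IDs in
# Device Manager (Details tab -> Hardware Ids) before using this for real.
import subprocess

def devcon_remove_cmd(vid, pid):
    """Build the DevCon command that removes the USB composite device."""
    return ["devcon", "remove", "USB\\VID_{}&PID_{}".format(vid, pid)]

def reset_mic(vid, pid, dry_run=True):
    cmd = devcon_remove_cmd(vid, pid)
    if dry_run:
        return " ".join(cmd)  # just show what would run
    subprocess.run(cmd, check=True)  # actually remove the device

print(reset_mic("0D8C", "0008"))
```

Remember DevCon's warning applies double here: removing the wrong device ID can leave you with worse problems than a stuck microphone.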

Feelers are out to Teradici and VMware to see if there is a more elegant solution to be found. It sure would be nice to be able to treat USB microphones as equal citizens in the mobile virtual world!

Trend Deep Security Manager and Windows XP Guests


I’ve been putting together a Trend Micro DSM implementation. DSM connects to appliances on each ESX host, which communicate through vShield endpoints on the VM guests, to provide “agentless” anti-virus/malware protection (I guess we aren’t supposed to count the vShield endpoint install). It’s a pretty nifty system in that you can really lighten the load that comes from running anti-virus/malware in your VDI environments by moving the AV scan load from the individual guests to the host directly.

One thing that is not clearly called out and, of course, turns out to be crucial, is SCSI compatibility at the endpoint. Since a lot of View deployments are still Windows XP based (usually 32-bit), don’t get caught in the trap of building up a perfect, slimmed-down, disk-aligned, gorgeous image and then realizing you need to change the SCSI driver. Trend Micro DSM currently only supports the LSI Logic and VMware Paravirtual SCSI drivers, not BusLogic and not IDE. By default on ESX 4.1, Windows XP 32-bit gets IDE disks, and BusLogic SCSI drivers if any SCSI device is added. Changing a SCSI driver, while doable in some circumstances, is never fun and usually ends in a rebuild of the OS.

Use a SCSI disk for your XP View parent image and make sure the driver is LSI Logic Parallel. Otherwise, your carefully crafted DSM environment will report all of its subsystems and appliances working perfectly, and your endpoint will show a “Filter Driver Offline” error.
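As a quick reference, the compatibility rule above can be reduced to a check against the vSphere API controller types. A sketch follows; the class names are my mapping of “LSI Logic” and “para-virtualized” onto the API's device names, so confirm the supported list against the DSM docs for your version:

```python
# Sketch: DSM's filter driver works with LSI Logic and VMware Paravirtual
# controllers, not BusLogic, and not plain IDE disks. The class names below
# are the vSphere API virtual-device types -- verify for your DSM version.
DSM_SUPPORTED_CONTROLLERS = {
    "VirtualLsiLogicController",    # LSI Logic Parallel (use this for XP)
    "ParaVirtualSCSIController",    # VMware Paravirtual
}

def dsm_compatible(controller_type):
    """True if the guest's disk controller will work with the DSM filter driver."""
    return controller_type in DSM_SUPPORTED_CONTROLLERS

print(dsm_compatible("VirtualLsiLogicController"))   # the XP View parent image case
print(dsm_compatible("VirtualBusLogicController"))   # -> "Filter Driver Offline"
```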

A question of continuity

So stick with me for a moment. I’m an applications-oriented guy, unlike a lot of my storage-oriented brethren here at Varrow. But I want to make the argument that the question of how we preserve our information is one of the oldest questions on the planet.

Almost 30,000 years ago, our ancestors were trying to preserve knowledge of their way of life. They delved into the darkness under the mountains, to the most secure place they could think of. They left later generations some amazing snapshots of their life.

And so it went, the great experiment. We devised constant new methods to store our race’s DNA, our history and culture. The constant struggle to retain information. Egyptian papyrus scrolls reveal to us now, 4,000 years later, a complex body of mathematics and science, with great chunks smudged or torn out.

To think: in the Dark Ages, there were monks whose sole task on this earth was to lovingly copy the contents of one book into another. This was considered holy work, preserving knowledge in a dark time.

Somehow we survived, and now that’s all old school. Everything is digital now. We are all set, right? Well, maybe not. Optical and magnetic media have a pretty short shelf life. CD/DVD/DLT tape lasts 5-30 years if you are very lucky. My sons’ copy of Mario racing lasted 1 year. But the times they are a-changing.

Which brings me to the present, where we as a race are finally solving one of the most ancient problems: how do we preserve our information? Storage companies are finally bridging the gap to allow for geographically dispersed storage movement without interruption of service. Just as the internet provided a common bus for communication between storage and processing pools, technologies such as EMC’s vPlex are enabling this kind of federation at the storage level. All without interruption.

Eventually, as this technology embraces all storage platforms, we are going to end up with storage “networks” that cover the entire country. You will be able to move your entire data center in real time from one state to another.

This is just the beginning of what is possible. As bandwidth becomes ever more ubiquitous, your storage array becomes just as flexible as your blade servers in moving around live resources.  Information is constantly refreshed. Even at the consumer level, the availability of the “public” cloud allows for the preservation of your family album.

The critical data your business lives and dies by is guaranteed a steady cycle of constant refresh, without all the hassle of managing the risk of a potential massive service interruption. It becomes a non-issue. Constant data refresh just HAPPENS. For managers of large implementations, even with ITIL guidelines, once the new environment is tested and certified, the actual migration is yawn-inducing. Not a terrifying journey into the depths of what caffeine can do to a person at 4 in the morning.

This is the kind of future we are building towards. No more “my family pictures were lost in the fire”. No more “a burst water main destroyed our DC”. We are at the point of creating a continually renewing global record of mankind. One that can withstand the usual random chaos we have all seen so closely. It isn’t a record etched in titanium and buried in the desert, or a painting buried 1,000 feet below the earth in a cave in the south of France. What we are working towards, and are seeing the first sprouts of success with now in EMC vPlex, is a vision of truly federated storage, eventually across vendors. Where entire virtual infrastructures can move from one storage platform to another in real time.

And as our scientific research delves into how we can store data at ever shrinking levels, and as we sit on the verge of a sea change in how active data is stored, the problem our ancestors faced of “how do I preserve this wicked cool sabertooth drawing”, is reaching its ultimate conclusion.  We are achieving a platform for which any piece of data can be maintained in an active (accessible) and renewable state.

Pretty cool time to be in IT!!

An End to a Cycle

Since the days of the above, the way we interact with computers has moved kind of like the tide. Years after this 30 ton monster got done revolutionizing the science of a postwar US, we moved into a world dominated by big iron. At work was a giant IBM mainframe and at home was…well…nothing. Come on! We’re talking the 50’s-70’s. Sweet headphones was your best bet.

Our computing power was centralized. This is the first age of what we now call “cloud” computing. Every bit of data in one massive repository. A pure era. Perfection. Even the programming code was constructed by pretty much straight-up scientists. This was the Garden of Eden of computing for the human race.

And then……this……

and the great exodus began. Eden was shattered. The march to the edge. Because you know what? That’s freaking cool.

This commenced, and then proceeded nearly uninterrupted, from this point to about 5 years ago. Computing resources steadily flowed more and more into the end point rather than the center.

We may suppose someone asking:

“Why? Why would you move data so far apart? We were in a pure state with all the info nice and snug together. Why would we want to leave the nest?”

To which everyone on earth would respond “Do you see how sexy those pictures above are?!?!”

A massive explosion in what you could do with a computer occurred. Suddenly the accounting department could get a month’s worth of work done in a week, and at home, kids were driving parents crazy with requests for Ataris and Nintendos and PlayStations. Some people were writing little Logo scripts for a tiny turtle, but we’re not talking about that!

For the decades that followed, more and more data was processed on the ever shrinking box under your desk. The sad carcasses of the central servers stood dumbly in the center, relegated to shuffling around emails and accounting database query responses. Sometimes, when one got really lucky, they got to run a batch job!! Forgotten hulks, in heavily air-conditioned rooms (if they were lucky).

And then…almost completely unobserved by humanity, in 1972 some of the brightest humans on the planet opted for <python>something completely different.</python>


(The little internet that could)

And with the creation of this little wonder and some clever conversation protocols, the flow to the edge reversed. Suddenly the horse-blinders were off, and the sound of screeching modem connection strings were ringing.

This slowly, inexorably, brings us back to today. The giant systems back in the data center are suddenly waking up and running applications again. As our science progresses, ever smaller devices are capable of ever increasing feats. All aided by nearly instantaneous communication completely leveling the playing field. We are evolving to the stage where it doesn’t matter where you do your computing. The information simply flows where it needs to be.

So I put before you that we stand on the precipice of a new age. Suddenly the way you use computers at home is no different from the way you use them at work. Just as you pop onto an app store to grab a useful little widget for your phone, or a hilarious video on YouTube, you can pop into your virtual desktop at work from anywhere in the world. In a moment you can grab whatever app you need in a seamless environment, always at your bidding, anywhere you are.

Suddenly the walls are falling away, and in their wake lies the opportunity to reinvent and reimagine how things should work. The earthquake under our feet is subtle, but it’s there.

We are coming to a state where the cycle of back and forth between central computing and decentralized computing truly is forming a “cloud” of computing resources that we can draw from, whether at home or at work.

It is one strange case of marketing-speak actually matching reality. We truly are developing in a way that resembles a cloud. Simply a secure, and normally controlled cloud.

This blog is meant to be part exposition of the interesting and the odd that I (and hopefully others), come across as we all begin really adapting to the new environment.

Full disclosure mode: engage! I’m a systems-engineer type. I’ve worked with what you could call “converged” systems for years now. I see how things are changing in the workplace at nearly the same pace that they change in our homes. Our tools are evolving at an ever more astonishing pace.

Suddenly we can allow a single person to maintain and grow a massive number of servers. With the right software it can be done with practically no effort. At the same time, in our homes if we want to hear the latest hot track, or suddenly want to listen again to an old classic, that is a mere 1 minute and 99 cents away at any time. In the sweet living room stereo if you want it.

We now can say, “Hey highly trained college grad! Instead of the boring old company PC, we’ll just give you X amount of money towards whatever hot item you want!”. And in the background a normally stressed out security admin is peacefully snoring in the breakroom, as all the applications and data are delivered in a secured encrypted pocket from which not even light could escape.

There are right now, certain vendors that are rapidly blurring this line, giving companies and consumers, a flexibility and reliability that simply never existed before. There are companies whose sole purpose and passion, is to help businesses be the best at harnessing this change, and the next generation. I happen to work for one of these companies, but I have no doubt, they are everywhere throughout the world.

It’s a truly interesting time that will present many opportunities to those that can see it. Especially in an environment where you need to bring in the best and the brightest, those companies that can make the business life as easy as the home life will thrive.

The cycle is over. The server and the client fought it to a draw and everybody won. There are no more rules about where your computing happens, there is only opportunity in deciding how it happens. It’s just like the Oklahoma land rush. It’s a wide open field to figure out how we work in the future.

Game on