Unidesk and Windows 10 1511 boot hanging

Quick post to point out an excellent article over at the Unidesk support site that doesn’t quite describe all the symptoms you may encounter.

With Unidesk 2.7+ (2.9.4 in my case), if you are capturing Windows 10 build 1511 or later, you may encounter an endlessly spinning Windows 10 boot screen after you capture your OS layer and deploy your first desktop based on it.


Unidesk’s article (found here) talks about the larger picture of how they capture images and provides an easy fix. It’s a pretty interesting peek under the covers at what they do during image capture. It is a far more predictive process around the storage side of things than I would have thought. App virtualization has changed so much in the last decade.

Unidesk helpfully points out how you can positively identify this issue in the C:\Windows\inf\setupapi.dev.log file, but I wanted to associate a few more Google searches with this excellent fix (it takes about 10 minutes to implement), because while they detail some scenarios you might run into, this common situation (capturing Windows 10) is not well advertised.
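If you want to eyeball the log yourself before clicking through, something like the sketch below will surface failure entries. The exact text Unidesk keys on is in their article, so treat the search pattern here as a generic placeholder.

    # Surface error/failure entries from the device-install log, with a couple
    # of lines of surrounding context. The pattern is a placeholder; the
    # Unidesk article lists the exact failure text to look for.
    Select-String -Path 'C:\Windows\inf\setupapi.dev.log' `
        -Pattern 'failed|error' -Context 2,2 |
        Select-Object -First 20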

If, after capturing a new Windows 10 image in Unidesk and deploying a new desktop from the clean gold image, you find yourself staring at an ever more hypnotic spinning circle, follow the steps in the link!

Happy OS layering!

Netscaler Access Gateway issue

Randomly seeing “Unable to launch application. Contact your help desk with the following information: Cannot connect to the Citrix XenApp server. SSL Error 43: The proxy denied access to … port 1494” when using two or more PVS-based STA servers running on the same image.

[Screenshot NetscalerSTAissue-pic1: the SSL Error 43 message]

So this is one of those annoying error messages that can mean a great many things.

  • Access Gateway licenses might not be installed.
  • Mismatched STAs configured on the Access Gateway and Web Interface.
  • You could actually have firewall issues with the XenApp server you are trying to hit (a quick reachability check is sketched after this list).
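For that last one, a raw TCP test from the gateway side of the network to the XenApp server on the ICA port rules the firewall in or out quickly. A minimal sketch; the server name is a placeholder:

    # Test raw TCP reachability to a XenApp server on the ICA port (1494).
    # 'XENAPP01' is hypothetical; substitute one of your own servers.
    $client = New-Object System.Net.Sockets.TcpClient
    try {
        $client.Connect('XENAPP01', 1494)
        Write-Host 'Port 1494 is reachable.'
    } catch {
        Write-Host "Port 1494 is blocked or the server is down: $_"
    } finally {
        $client.Close()
    }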

However, in a recent case all of those factors were fine, and the problem appeared randomly. Usually, if any of the above issues are present, it is an all-or-nothing game: it is either totally broken every time, or it works. In my case it would work randomly. You would click on your app in Web Interface, and if you got the above message, retrying a few times would let it connect fine. Other times it would connect with no message at all.

So, to baseline the configuration:

  • 2x NetScaler 10.0 running Access Gateway
  • 2x XenApp 6.5 HRUP1 servers providing zone data collection for the farm and the STA service
  • KEY POINT: both STA servers are streamed from the same PVS vDisk

So what is happening is best displayed in the NetScaler config for the Access Gateway virtual server: go to the Published Applications tab and look at the STA identifiers.

[Screenshot NetscalerSTAissue-pic2: the STA identifiers on the Published Applications tab]

Now, it is important to note this screenshot is from AFTER I fixed the issue. When the issue was occurring, both STA identifiers were identical. Essentially, Citrix expects the identifier to be unique and gets confused when both STA servers respond with the same identifier. The relevant (and fairly new) Citrix Limited Release KB article is here:

http://support.citrix.com/article/CTX137509
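Before applying the hotfix, you can confirm whether your PVS-streamed STA servers really do share an identifier. The STA identifier lives in CtxSta.config on each XenApp server; the path below is the usual XenApp 6.5 location and the server names are placeholders, so adjust both for your environment.

    # Read the STA identifier (the UID= line) from CtxSta.config on each STA
    # server. If both return the same UID, you are hitting this issue.
    $servers = 'XENAPP01', 'XENAPP02'   # placeholders
    foreach ($s in $servers) {
        $conf = "\\$s\c$\Program Files (x86)\Citrix\system32\CtxSta.config"
        Select-String -Path $conf -Pattern '^UID=' |
            ForEach-Object { "$s -> $($_.Line)" }
    }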

The hotfix side of the KB corrects an issue with the XenApp Server Configuration tool when you prepare a server for imaging and provisioning: it was not guaranteeing that each server got a unique STA identifier. The second part of the fix is simply setting the Citrix XML Service to delayed start to ensure it comes up after the NIC does.
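The delayed-start piece needs no hotfix at all; it can be set from an elevated prompt. A sketch, assuming the Citrix XML Service registers under the service name CtxHttp (confirm with Get-Service first):

    # Set the Citrix XML Service to delayed automatic start so it comes up
    # after the NIC. Confirm the service name first: Get-Service Ctx*
    # Note that sc.exe requires the space after 'start='.
    sc.exe config CtxHttp start= delayed-auto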

So a simple fix for an odd and random problem.

vCenter Accidental lockout! Read-only pitfalls

Recently while working with a customer, we had a single host that was completely read-only for their domain login.

The symptoms: buttons and controls they normally had access to (everything to do with editing) were grayed out.

(Most screenshots of vCenter will be from the Web Client from here on out because, like it or not, it is going to be the only choice soon.)

[Screenshot: editing controls grayed out in the vSphere Web Client]

Since all permissions were supposed to be set at the top level of vCenter, and were set to a group, this was puzzling. A quick look at that host’s Permissions tab, and we find our culprit.

[Screenshot: the host’s Permissions tab showing the read-only entry]

So after some googling, I found the explanation here:

http://kb.vmware.com/kb/1005680

So the core of the issue is that a read-only permission setting overrides an administrator setting for the object. If you set it at the top level, you can even lock all administrators out of vCenter in one go (depending on how you set up permissions to begin with).

The article is accurate on how to correct the situation, but since a lot of admins I run across are nervous around SQL, I thought a walk-through video might be helpful.

In essence, to fix the issue you just need to update a single table in the vCenter database, dbo.VPX_ACCESS. However, being a good administrator, you are going to want to back up your database before editing it directly 🙂
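For those who would rather read it than watch it, the edit boils down to something like the sketch below. It assumes a SQL Server back end, the Invoke-Sqlcmd cmdlet (from the SQLPS/SqlServer module), and placeholder server, database, and row values; verify the role IDs against your own dbo.VPX_ROLE table before changing anything.

    # BACK UP THE DATABASE and stop the vCenter Server service first.

    # 1. Find the offending permission row and note its ID.
    Invoke-Sqlcmd -ServerInstance 'SQL01' -Database 'VCDB' -Query "
        SELECT ID, PRINCIPAL, ROLE_ID, ENTITY_ID, FLAG FROM dbo.VPX_ACCESS;"

    # 2. Point that row back at the Administrator role (-1 on the systems I
    #    have seen, but confirm in dbo.VPX_ROLE). The row ID is a placeholder.
    Invoke-Sqlcmd -ServerInstance 'SQL01' -Database 'VCDB' -Query "
        UPDATE dbo.VPX_ACCESS SET ROLE_ID = -1 WHERE ID = 42;"

    # 3. Start the vCenter Server service again and log back in.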

Below is a short video walking through editing the table and restoring your access.

Where are the logs?!

Starting to work on some “IT Essentials” training videos for the folks in our NOC. In this section we focus on finding logs for various technologies. Logs are GOLD! Always find the logs!

In this installment we have Windows and SQL logs covered. Apologies as I learn the best way to record these kinds of videos; I can only promise they will get better 😉

Searching Windows Logs
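As a quick companion to the video, here is the kind of search it builds toward; the log name and filter values are just an example.

    # Pull the 25 most recent error-level entries (Level 2) from the System log.
    Get-WinEvent -FilterHashtable @{ LogName = 'System'; Level = 2 } -MaxEvents 25 |
        Format-Table TimeCreated, ProviderName, Message -AutoSize -Wrap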

Searching SQL Logs
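And the SQL-side companion, assuming the Invoke-Sqlcmd cmdlet is available and using a placeholder instance name:

    # Search the current SQL Server error log for entries containing 'error'.
    # xp_readerrorlog arguments: log number (0 = current), log type
    # (1 = SQL error log), and a search string.
    Invoke-Sqlcmd -ServerInstance 'SQL01' -Query "
        EXEC master.dbo.xp_readerrorlog 0, 1, N'error';"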

Exchange 2010 SP1 Stretched DAG node

Just completed an interesting deployment and I thought I would capture a few key points of interest.

The scenario is extending an existing Exchange Database Availability Group to include a copy in an offsite datacenter connected over a 10 Mb WAN link with 50 ms of latency from the local DAG HA pair. Note that this particular scenario does not include a DAG pair in the DR datacenter, just a single node, so we will not be configuring a separate witness file share for the DR side.

First off, this KB article from Microsoft goes straight to the heart of the configuration, so if you are looking to go in depth, just click and start reading.

http://technet.microsoft.com/en-us/library/dd638129.aspx

After going through it, I think I can boil the process down to the following key points:

  • When adding new nodes to an existing DAG cluster, especially in another datacenter, server builds may vary. It is exceedingly important that the DAG members match each other on NIC count: they all have to have the same number of NICs visible to the OS.
  • If you are a straight Exchange admin who does not normally work with network routing, this process does require you to work a bit with NETSH if you have broken your replication network out from your MAPI network (and you should be doing that!!). Since a given server should have only one default gateway (on the MAPI network), your replication network won’t have a default gateway; NETSH static routes establish the routing for your replication network instead.
    • Syntax is NETSH interface ipv4 add route <remote replication subnet> “REPLICATION NIC NAME” <next-hop gateway on the local replication subnet>
      • netsh interface ipv4 add route 10.154.200.0/24 “DAG REPLICATION” 10.150.200.1
    • This gets run on every DAG node so they can all reach the other replication subnets.
  • Unless you are in the unusual situation of having a flat network covering both your primary and your DR site, you will be working with two different subnets for your MAPI network. For each MAPI NIC you have in a unique subnet, you must have a unique DAG virtual IP assigned in that network as well, and you must add that DAG IP to the DAG in the Exchange Management Console (pictured below, with a shell equivalent after the screenshots).

[Screenshot ExchDAG1: the DAG properties in the Exchange Management Console]

[Screenshot ExchDAG2: the DAG IP addresses being added]
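If you prefer the shell, the console step above maps to something like this sketch. The DAG name and addresses are placeholders, and note that this parameter replaces the whole IP list, so include every DAG IP, not just the new one.

    # EMS equivalent of assigning the DAG IPs: one virtual IP per MAPI subnet.
    Set-DatabaseAvailabilityGroup -Identity 'DAG1' `
        -DatabaseAvailabilityGroupIpAddresses 10.150.100.50, 10.154.100.50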

  • Add your DAG node from the Exchange Management Console, not Failover Cluster Manager (a shell equivalent is sketched after this list).
  • Reboot the new DAG member after adding it to the DAG, and make sure to restart any existing Exchange Management Console session.
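A sketch of the shell equivalent for the add itself (names are placeholders):

    # Add the DR node to the existing DAG from the Exchange Management Shell.
    Add-DatabaseAvailabilityGroupServer -Identity 'DAG1' -MailboxServer 'DR-MBX1'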

That’s it! It is a pretty easy process once you have gone through it once, and very handy for replicating your Exchange data to a safe location. Just be sure your latency is not over 500 ms, and expect that large mail databases are going to take a while to seed. You have ZERO control over throttling the seed process from Exchange, so if you have to throttle it, you will need to involve your network team and do it at the firewall.

Cheers!