A place where programmers can discuss various programming topics and experiences.



VMworld 2009 - vSphere SDK Best Practices

I've been back from San Francisco for a week now and have finally gotten around to writing up a note about my VMworld 2009 trip.  My trip started off with Monday's Tech Exchange and continued with three more days of general VMworld sessions, ranging from lower-level API discussions to high-level marketing speak.  If I had to pick a favorite day, it would be the Tech Exchange.  It really is geared toward developers.  You get some great presentations and also a chance to meet the engineers whose APIs you might use (or frequently use, as in my case).  Here are some tidbits I picked up regarding performance counter queries in the vSphere Web Services SDK:

// The "entityMORef" variable is a ManagedObjectReferenceType for
// any particular entity we are monitoring (e.g. VirtualMachine,
// ResourcePool, etc.).
// The "perfManager" variable is a reference to the PerformanceManager
// retrieved as a property from the ServiceContent object.
// The "_service" object is the VimService object.

PerfProviderSummary summary =
    _service.QueryPerfProviderSummary(_sic.perfManager, entityMORef);

In my case, querying performance counters for a collection of 1,000 virtual machines, I only need to make this call once. Previously, I'd been invoking this method for all 1,000 virtual machines on the first iteration of the query. As you probably already know, if you are querying for performance counter information on a scheduled basis, there's also no reason to make this call on every query iteration. Call QueryPerfProviderSummary() on the first query and then cache those summary objects away (in your PerfQuerySpec object(s)) for use in later queries.

// The "_service" object is the VimService object.
// Let's collect performance metrics for the last 5 minutes.
DateTime serverTime = _service.CurrentTime(serviceRef);
DateTime startTime = serverTime.AddMinutes(-5);
DateTime endTime = serverTime;

// Query all available metrics for the managed object reference
int dataInterval = 30; // Note: this interval is for real-time metrics
PerfMetricId[] aMetrics = null;
try
{
    aMetrics =
        _service.QueryAvailablePerfMetric(perfManagerRef,
                                          vmMORef,
                                          startTime,
                                          true,   // beginTimeSpecified
                                          endTime,
                                          true,   // endTimeSpecified
                                          dataInterval,
                                          true);  // intervalIdSpecified
}
catch (Exception)
{
    // Some exception handling here...
}

PerfQuerySpec spec = new PerfQuerySpec();
spec.entity = vmMORef;
spec.maxSample = 1;
spec.maxSampleSpecified = true;
spec.startTime = startTime;
spec.startTimeSpecified = true;
spec.endTime = endTime;
spec.endTimeSpecified = true;
spec.intervalId = dataInterval;
spec.intervalIdSpecified = true;

// Set your format to "csv"
spec.format = "csv";

This last tidbit will improve your performance queries significantly. The only downside is that you now have to parse the comma-separated values instead of iterating through an array of objects. I plan on implementing this last portion ASAP because I frequently query in excess of 500 VMs, and this will help my query time significantly.

- Gilemonster

posted by Gilemonster @ 11:53 AM




VMworld 2009 - San Francisco, CA

I've been working in a virtualized environment for a good 12 - 14 months now, and I have to say I'm very impressed with everything.  VMware has really got their stuff together, and I have even enjoyed using their vSphere Web Services SDK.  It is complicated to understand at first, but once you grasp the high-level layout and structure of the environment, it is much nicer than other third-party APIs I've used in the past.

Because of my recent work in this area I'm getting to travel to VMworld 2009 in San Francisco, CA this year and I'm stoked.  It should be a real blast and contain tons of information for developers looking to implement solutions provided out of the various SDKs that VMware provides.  I hope to provide daily highlights to let everyone know what is going on (no, I don't believe in twitter)!

- Gilemonster

posted by Gilemonster @ 10:40 AM




Bitten by an STL Gotcha...again

I was recently using an STL map container and hit one of those "STL gotchas" that I had forgotten about.  I figured I would pass along what I found and the changes I made.  As everyone already knows, an STL map is a sorted associative container that stores key-value pairs. The existing code I had, which was using a std::map, only ever contained unique key-value pairs. What I mean is that each pair was inserted into the map once, and the pairs were never updated by inserting the same key twice. When I went to change this code, I realized I would need to change the values of existing items in the map. Here's some sample code to explain what I was doing:


typedef std::map<std::wstring, std::wstring> MyMap;
typedef std::pair<std::wstring, std::wstring> MyMapEntry;

// Insert a value with key "MyKey".
MyMap theMap;
theMap.insert(MyMapEntry(L"MyKey", L"MyValue1"));

// Update the value associated with key "MyKey."
theMap.insert(MyMapEntry(L"MyKey", L"MyValue2"));

When I ran this code and iterated through the map, I noticed that the value associated with key "MyKey" was still "MyValue1"!!! What the heck is going on? Well, the insert() routine is a little tricky here (evil if you ask me). If you are inserting a new item into the map, everything works fine. If you are attempting to overwrite an existing item, insert() sees that the key is already there and leaves the existing value untouched. Nice... So I tried operator[] to update the map instead, and that worked. Here's what the second iteration (no pun intended) of the code looked like:


typedef std::map<std::wstring, std::wstring> MyMap;
typedef std::pair<std::wstring, std::wstring> MyMapEntry;

// Insert a value with key "MyKey".
MyMap theMap;
theMap[L"MyKey"] = L"MyValue1";

// Update the value associated with key "MyKey."
theMap[L"MyKey"] = L"MyValue2";

So, now that it is working, I should just let it go, right? Wrong. I remembered a Scott Meyers article from awhile back (the advice also appears in Effective STL, Item 24) where he mentioned how each of the mechanisms for inserting/updating items in a map can affect efficiency depending on how you use them. Here's the basic gist of things:

  • insert is the efficient choice when adding a new item to a map.
  • operator[] is the efficient choice when updating an existing item.

But this is obviously a royal pain, because how do you know whether an item is already in a map without searching for it first? Surely it can't be more efficient to search for an item and then decide how to insert/update it? This is where lower_bound comes into play. This routine returns an iterator to the first element in a container whose key is equal to or greater than a given key. Once you have an iterator to that point in the map, you can determine whether the key really exists by invoking key_comp. If the keys compare equal, you've found your existing item and can update it through the iterator. If they don't, you now have an iterator to where that item SHOULD be inserted into the map, so the "search" operation is not wasted after all. You can use this location "hint" to insert your new item into the map. Meyers also mentioned that a correctly hinted insert runs in amortized constant time. So here's how you would write the earlier example given what we know now:


typedef std::map<std::wstring, std::wstring> MyMap;
typedef std::pair<std::wstring, std::wstring> MyMapEntry;

// Insert a value with key "MyKey".
MyMap theMap;
theMap.insert(MyMapEntry(L"MyKey", L"MyValue1"));

// Assume you don't know whether an item has already been inserted
// after this point *grin*
MyMap::iterator iter = theMap.lower_bound(L"MyKey");
if ((theMap.end() != iter) && 
     !(theMap.key_comp()(L"MyKey", iter->first)))
{
    iter->second = L"MyValue2";
}
else
{
    theMap.insert(iter, MyMapEntry(L"MyKey", L"MyValue2"));
}

Now the example above is pretty simple, but when you have code that performs insertions in numerous locations, it is easy to lose track of whether an item already exists in the map. You could wrap this pattern in a template function as well (which is what I did). Until next time...

- Gilemonster

posted by Gilemonster @ 2:02 PM




Practices of an Agile Developer

I just finished reading Practices of an Agile Developer (by Venkat Subramaniam & Andy Hunt) and I wanted to recommend it to those who want to learn about the Agile process or just want to improve their own code quality and performance.  Of all the methodologies and processes that make up Agile (or rather allow it to fit your environment), there is a core set of practices that this book points out.  It takes these practices and shows the reader how to apply them to their daily workload.  Here's a great review of the book that I recommend as well (why should I write the review when someone has already done it for me?).  Go out and buy this.  It's worth the $30 price tag.

- Gilemonster

posted by Gilemonster @ 12:32 PM




ATL 7.0 String Conversion Classes

If you've ever written Win32 code that is compiled for both ANSI and Unicode you've probably used the ATL 3.0 string conversion classes and their macros (e.g. W2A, A2W, T2A, A2T, etc.). They have been very useful but unfortunately have problems. Microsoft has alleviated a number of these issues in ATL version 7.0. This article gives a brief overview of those fixes and how the use of these classes has improved in version 7.0.

The ATL 3.0 string conversion classes had a number of problems, all of which are fixed in version 7.0.

The main problems in ATL 3.0 relate to where converted strings are stored and when they are freed. All converted strings were allocated on the stack, and they were not freed until the calling function returned. This means that if you had a routine that never returned (e.g. a "watch-dog" thread that runs until your application stops), your converted strings were never freed. This could put tremendous strain on a thread's stack, depending on how large the strings are and how often they are allocated.

In version 7.0, ATL destructs the string when the conversion object goes out of scope. It also checks the size of the string: if it is too large for the stack, it is stored on the heap. So small strings live on the stack, but large ones are allocated on the heap. Because the strings are destructed when they go out of scope, it is now safe to use the classes in loops; you know that when a loop iteration completes, the string will be destructed. This also makes them safe for use in exception handling code (e.g. catch (MyException &e)). Another nice improvement is the ability to leave that pesky USES_CONVERSION definition out of your code. It always annoyed me and I'm glad to see it go. :-)

Now that we've seen a quick overview of how the new classes are safer, let's look at how to use them, because usage is drastically different, and code written in the old macro style will produce undefined results. If you want to use the new macros, you'll need to change your code. Below is the general form of the macros, which I stole from the MSDN:

CSourceType2[C]DestinationType[EX]

where:

Here are some simple examples of how to use the new macros. Note: I hate LPCSTR and LPCWSTR, so you'll always see me use char * and wchar_t * whenever I can (probably not a good practice though). :-)

// Figure 1:
// Convert a UNICODE string to ANSI.
void 
ConvertUnicodeToAnsi(wchar_t * pszWStr)
{
   // Create a local instance of the CW2AEX class and construct
   // it using a wchar_t *.
   // Note:  Here you will notice that I am using CW2A which is 
   // a typedef macro of the CW2AEX class.
   CW2A pszAStr(pszWStr);

   // Note: pszAStr will become invalid when it goes out of 
   // scope.  In this example, that is when the function
   // returns.
}
// Figure 2:
// How to use a temporary instance of the CA2WEX class.
void
UseTempConvertedString(char * pszAStr)
{
   // Create a temporary instance of the CA2WEX class
   // and use it as a parameter in a function call.
   SomeSampleFunction(CA2W(pszAStr));

   // Note the temporary instance created in the
   // above call is only valid in the SomeSampleFunction
   // body.  Once the function returns, the temporary
   // string is destructed and no longer valid.
}
// Figure 3:
// How NOT to use the conversion macros and classes.  This
// example uses the new classes but applied using the old
// programming style.
void
BadFunction(wchar_t * pszWStr)
{
   // Create a temporary instance of CW2A, save a 
   // pointer to it and then use it.
   char * pszAStr = CW2A(pszWStr);

   // The pszAStr variable in the following line is an invalid pointer,
   // as the instance of CW2A has gone out of scope.
   ExampleFunctionA(pszAStr);
}

Figures 1 and 2 are pretty straightforward, but Figure 3 deserves further discussion. In ATL 3.0, this is exactly how we used the conversion classes. That code structure is no longer valid and will produce undefined results: because of the new scoping of the conversion classes, the temporary created by the CW2A constructor is destructed at the end of the statement, so the pszAStr variable does not contain a valid pointer even though it appears it should. If you need a converted string throughout the scope of a function, declare a local instance of the CW2AEX class on the stack and pass the appropriate parameters during object construction (e.g. CW2A pszAStr(pszWStr);).

Specify a Custom Buffer Size
The default buffer size for the new ATL classes is 128 characters. If you need a different buffer size for certain conversions, use the EX versions of the macros, which are C++ templates that take the buffer size as a template argument. Here is an example:

// Figure 4:
// Specify a new buffer size with C++ template syntax.
void
UseCustomBufferSize(wchar_t * pszWStr)
{
   // Use a 16-character buffer.
   SomeFunction(CW2CAEX< 16 >(pszWStr));
}

The new ATL 7.0 string conversion classes are a much-needed improvement over their 3.0 siblings. Of course, you don't have to change all your code to use them if you don't want to, but if you are concerned about application performance you should consider updating. You will be able to use the classes in a number of places previously unavailable, and you can remove old "work-around" code because of the safety of the new classes. I plan on looking at my own code and estimating how much effort it will take to upgrade my ATL usage to version 7.0. I might not be able to make the full change, but I am at least going to look at the cost/benefit ratio. For new code I'll only use the 7.0 classes. You should at least consider the same. Until next time...

- Gilemonster

posted by Gilemonster @ 12:10 PM