Friday, August 13. 2010
Passing pipes to subprocesses in Python in Windows
Passing pipes around in Windows is more complicated than it is in Unix-like operating systems. This post describes how file descriptor inheritance works in Windows and gives a quick example of how to pass a pipe to a subprocess.
The first difficulty to overcome is that file descriptors are not inherited by subprocesses in Windows as they are in Linux. However, OS file handles can be inheritable, and it is possible to retrieve the OS file handle associated with a C file descriptor using the _get_osfhandle function. It is also possible to convert the OS file handle back to a C file descriptor in the child process using _open_osfhandle. However, the OS file handle is not inheritable (see Python Bug 4708; as a side note, I don't think handles should be inheritable by default, since there is no preexec_fn in which to close them and prevent deadlock situations where a child holds the write end of a pipe open, but that is another matter), so to get an inheritable OS file handle it must be duplicated using DuplicateHandle.
The operating system-specific functions described above can be accessed through the _subprocess and msvcrt modules. Their use can be seen in the source for the subprocess module, if desired. Note that the interface in _subprocess is not stable and should not be depended on, but since there is no alternative (that I am aware of, short of writing another module in C) it is the best solution. Now, an example:
parent.py:
import os
import subprocess
import sys

if sys.platform == "win32":
    import msvcrt
    import _subprocess
else:
    import fcntl

# Create pipe for communication
pipeout, pipein = os.pipe()

# Prepare to pass to child process
if sys.platform == "win32":
    # Duplicate the OS file handle backing the read end so that the
    # duplicate is inheritable by the child process
    curproc = _subprocess.GetCurrentProcess()
    pipeouth = msvcrt.get_osfhandle(pipeout)
    pipeoutih = _subprocess.DuplicateHandle(curproc, pipeouth, curproc, 0, 1,
                                            _subprocess.DUPLICATE_SAME_ACCESS)
    pipearg = str(int(pipeoutih))
else:
    pipearg = str(pipeout)
    # Mark the write end close-on-exec so the child does not inherit it
    # (a child reading to EOF would block if it held the write end open)
    # Can also be closed in a preexec_fn passed to subprocess.Popen
    fcntl.fcntl(pipein, fcntl.F_SETFD, fcntl.FD_CLOEXEC)

# Start child with argument indicating which FD/FH to read from
subproc = subprocess.Popen(['python', 'child.py', pipearg], close_fds=False)

# Close read end of pipe in parent
os.close(pipeout)
if sys.platform == "win32":
    pipeoutih.Close()

# Write to child (could be done with os.write, without os.fdopen)
pipefh = os.fdopen(pipein, 'w')
pipefh.write("Hello from parent.")
pipefh.close()

# Wait for the child to finish
subproc.wait()
child.py:
import os
import sys

if sys.platform == "win32":
    import msvcrt

# Get file descriptor from argument
pipearg = int(sys.argv[1])
if sys.platform == "win32":
    # Convert the inherited OS file handle back to a C file descriptor
    pipeoutfd = msvcrt.open_osfhandle(pipearg, os.O_RDONLY)
else:
    pipeoutfd = pipearg

# Read from pipe
# Note: Could be done with os.read/os.close directly, instead of os.fdopen
pipeout = os.fdopen(pipeoutfd, 'r')
print pipeout.read()
pipeout.close()
Note: For this example to work, python must be on the executable search path (i.e. in the %PATH% environment variable). If it is not, change the subprocess invocation to include the full path to python.exe. Note also that there is some complication with OS file handle leakage. When a _subprocess_handle object is garbage collected, the handle that it holds is closed unless it has been detached. Therefore, if the subprocess will outlive the scope in which the handle variable is defined, it must be detached (by calling its Detach method) to prevent closing during garbage collection. But this solution has its own problems, as there is no way (that I am aware of) to close the handle from the child process, which means a file handle will be leaked.
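For illustration, here is a minimal sketch of that case, continuing from parent.py above (Detach returns the raw handle value):

# Prevent the duplicated handle from being closed when the
# _subprocess handle object is garbage collected. After this,
# nothing can close the OS handle, so it is deliberately leaked.
pipearg = str(pipeoutih.Detach())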
Above caveats aside, this method does work for passing pipes between parent and child processes on Windows. I hope you find it useful.
Monday, August 9. 2010
BAD_POOL_HEADER in VirtualBox
Thursday, August 5. 2010
Prevent Web Pages from Stealing Focus
I have had enough of web pages arbitrarily changing the cursor focus. Enough of typing half of a URL into the URL bar and half into the text box that stole focus. Enough of typing half of a password into a password box and half into the text box that stole focus. Enough! If you have had enough, check out the NoFocus Greasemonkey script. This script prevents pages from changing focus during page load and, optionally, for the entire life of the page.
This is not a new problem. It has been discussed before on superuser and elsewhere. There are also alternative solutions, such as using Firefox's Configurable Security Policies to disallow focus, or using another Greasemonkey script such as NoFocusForYou and Disable Focus on Window Load. However, NoFocus has several advantages:
- No exceptions are thrown, as with CSPs. Most page authors are not expecting focus to throw exceptions (rightly so), and when it does it will often completely break the page by preventing the remainder of the script from running.
- Disables focus immediately (well, in the DOMContentLoaded event, when Greasemonkey runs its scripts). Other scripts only disable focus changing once the load event has fired, which doesn't address focus changes before this event occurs.
- Disables focus changes for all elements currently on the page and all dynamically created elements. Rather than disabling focus for the elements on the page during load or DOMContentLoaded, this script disables focus for all elements current and future.
- Provides warnings when focus change is prevented. Inevitably, when debugging a page sometime in the future, we will forget that focus changing is prevented (if only briefly) and will fail to understand why a page is not working as expected. NoFocus counteracts this problem by using the Greasemonkey logging facility to write a warning into the error log whenever a focus change is prevented, making it easier to determine why focus is not behaving as expected.
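To give a flavor of the approach, here is a simplified sketch (this is not the actual NoFocus source, and it glosses over the Greasemonkey sandbox details that the real script must handle through unsafeWindow):

// Override focus() on the element prototype so every current and
// future element is covered, and log each prevented focus change.
(function () {
    var realFocus = unsafeWindow.HTMLElement.prototype.focus;
    unsafeWindow.HTMLElement.prototype.focus = function () {
        GM_log("Prevented focus change on " + this.tagName);
        // To allow focus again later (e.g. after page load),
        // restore realFocus onto the prototype here.
    };
})();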
I hope you enjoy the script and enjoy not losing focus!
Sunday, August 1. 2010
State of the company August 1st
The blog has recently started to look like it was written by programming robots. To show that we are actually real life people with feelings and emotions and organs I figured I would write a little about what we have been up to.
Wednesday was the Emerging Technologies Symposium hosted by the Chamber of Commerce. We all went and got to talk with some cool folks. I got us a booth right across from Microsoft, thinking Kevin could wage an all-out war in order to release some of the pent-up frustration he has had building since he started working with the Microsoft Sync Framework. Unfortunately, Microsoft sent a non-Microsoft lackey, probably realizing full well that we were planning on ambushing them. Next time, Microsoft! Otherwise I thought it was an excellent way to advertise and spend the day, especially considering the booth only cost $150 and we all got lunch+$25 gift card from HP.
What We Are Doing Right Now
Peter is finishing up work on the heat mapping software he has been working on for NWB Sensors. If you need the best heat camera+software solution that money can buy, contact one of the guys at NWB! Kevin has been working hard on the big office management suite he is making for HCS as well as on a super secret project demo. As I mentioned above, some of the office management suite functionality was being done using Microsoft Sync, but I'm pretty sure Kevin gave up on that and moved back to Git, which is great because we now have office t-shirts that say "Git Forked" and this is just one more reason to wear them. I have been working on a PHP script to download MLS files from a repository and add property listings to a Joomanager database, as well as create search functionality for said database. Unfortunately, because Joomanager is set up to allow you to enter any number of fields you want, dealing with the database is a gigantic pain. Fortunately it is a kind of fun challenge and I'm almost done!
What We Will Do In The Future (Probably)
This Friday the 6th is our second LAN party. We will be holding it at the Bozeman public library from 3:00 p.m.-Midnight'ish and the theme is StarCraft II but everyone and all games are welcome. Our official LAN website is: http://www.digitalenginesoftware.com/lan/ . Further out I will be helping to teach classes in rural Montana towns on how to setup and market your small business website. This program is being put on by the Tech Ranch and you can find out more at the Orbit Montana website. Peter will be working on our speculative project which is just getting to the exciting parts and Kevin will be planning his revenge. I'll update more next month!
Monday, July 12. 2010
The case of the missing tiff plugin
As a little background to this post, during the course of a recent client project involving image manipulation, I wrote some code to handle images in the tiff format. I used the Java Advanced Imaging (JAI) library together with the ImageIO class, which made reading from the image file simple: ImageIO.read(File) automatically determines the file format and performs the file read internally, returning a fully usable BufferedImage object. It makes reading data from an image about as easy as it can be, and I highly recommend using it.
While working on the project, I needed to write a simple utility to count the number of unique colors in an image and print out how many pixels in the image were each color (useful for debugging the main application). It's a really simple Java program that loops through the image, incrementing the count in a hash map of color values to pixel counts. I copy/pasted the image reading code from the main program, and was surprised to see the test program start generating errors stating that the tiff file could not be read because there was no image reader associated with the tiff file format. Lacking a tiff reader is really surprising because this test utility is just another class file within the same eclipse project as the main application, and therefore has the same class path. Both jai_core.jar and jai_codec.jar (the two jars that make up the JAI library) are on the project's class path, so there should be no reason that one java file would have access to them while another does not.
It turns out this was also a problem for gif and jpeg images in versions past, as evidenced by this FAQ question on the JAI home page:
On Solaris, Java Advanced Imaging complains about lack of access to an X server.
Java Advanced Imaging versions previous to JAI 1.1.1 used the AWT toolkit to load GIF and JPEG files. This problem is a manifestation of a JDK bug in which creation of the AWT Toolkit class results in an attempt to open the X display. To work around this problem in Java Advanced Imaging versions prior to 1.1.1, either make an X display available to the Java runtime using the DISPLAY environment variable (no windows will appear on the display), or consider running a dummy X server that will satisfy the AWT, such as the Xvfb utility included with the X11R6.4 distribution. In the JAI 1.1.1 version, the GIF and JPEG decoders were improved to no longer have a dependency on the X server.
The answer, it turns out, is that my simple utility does not set up the AWT windowing system (since I wrote it as a CLI utilizing System.in) and therefore never loads the tiff image reader plugin. In order to utilize the tiff reader plugin from the JAI library, your code must perform at least one of the following calls:
- Instantiate a JFrame
- Call Toolkit.getDefaultToolkit()
- Call Application.getApplication() (Mac OS X java extension)
While I still don't know the exact nature of the behavior, all evidence points to the conclusion that a java program that needs to utilize the tiff image reader must set up the AWT windowing system in some manner. Even if your program (like my test utility) doesn't need to create a window or deal with the windowing system in any fashion, you must make one of the above calls in order to correctly register the tiff reader.
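For example, a minimal sketch (the class name is mine, not from the original utility; it assumes jai_core.jar and jai_codec.jar are on the class path):

import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class TiffReadExample {
    public static void main(String[] args) throws Exception {
        // Initialize the AWT toolkit so the tiff reader plugin gets
        // registered; no window is ever created.
        Toolkit.getDefaultToolkit();
        BufferedImage img = ImageIO.read(new File(args[0]));
        System.out.println(img.getWidth() + "x" + img.getHeight());
    }
}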
Sunday, July 11. 2010
Clone Only HEAD Using Git-SVN
A quick tip for git-svn users: When checking out an SVN repository where only the HEAD revision is desired, the following snippet may be useful:
git svn clone -$(svn log -q --limit 1 $SVN_URL | awk '/^r/{print $1}') $SVN_URL
The above snippet determines the most recent SVN revision number (using svn log) and passes it to git-svn-clone. This can also be added to git as an alias by adding the following to .gitconfig:
[alias]
svn-clone-head = "!f() { git svn clone -`svn log -q --limit 1 $1 | awk '/^r/{print $1}'` $1 $2; }; f"
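With the alias defined, the clone becomes a one-liner (the repository URL and target directory here are placeholders):

git svn-clone-head http://svn.example.com/repo/trunk repo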
For checking out the last $N commits, a similar convention can be used:
git svn clone -$(svn log -q --limit $N $SVN_URL | awk '/^r/{rev=$1};END{print rev}') $SVN_URL
Friday, July 9. 2010
HttpHandlers in Virtual Directories on IIS6
Background
I recently encountered an interesting problem related to how Virtual Directories interact with web.config when dealing with an HttpHandler. The website was virtually rooted at "/webapp" and physically rooted at "C:\inetpub\wwwroot\webapp". Inside the application, I wanted "files" ("/webapp/files") to be physically rooted on another drive at "E:\files" so that large data files could be stored on a more appropriate drive. Furthermore, I wanted a custom HttpHandler which would generate a few files inside "files" on the fly. Initially, this was set up by creating a virtual directory named "files" which pointed to "E:\files"; the HttpHandler was placed in App_Code, and web.config contained the following:
<?xml version="1.0"?>
<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
<system.web>
<httpHandlers>
<add verb="GET" path="files/generated.txt" type="WebApp.MyHttpHandler" />
</httpHandlers>
</system.web>
</configuration>
The Problem
As I quickly found out, this doesn't work (for several reasons, as we will see). First, requests for .txt files are not handled by aspnet_isapi.dll (by default), so whatever is done in the ASP.NET code is irrelevant because IIS will not call ASP.NET to handle the request. To fix this problem, the .txt extension can be added to the list of extensions handled by aspnet_isapi.dll (which will cause extra overhead as each request is run through the ASP.NET ISAPI handler, even when the file exists on disk), or the extension of the generated file can be changed to something already mapped to aspnet_isapi.dll (like .aspx).
Next, unless the content of "files" is going to be substantially different from the rest of the application, the "files" Virtual Directory must not be a Virtual Application. If the "files" mapping is really a Virtual Application, it will not share code with the parent application so the HttpHandler class will not be found.
Finally, due to how the ASP.NET Configuration File Hierarchy and Inheritance works, web.config will be (essentially) re-applied in the virtual directory, so "files/generated.aspx" will be "files/files/generated.aspx" when considered from inside of the "files" virtual directory. To fix this (while not also creating a "/generated.aspx" alias), remove the httpHandlers section in the global web.config and create a web.config inside of the physical directory for "files" with path="generated.aspx", as shown below.
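For reference, a sketch of the web.config placed in E:\files (reusing the handler type from the configuration above):

<?xml version="1.0"?>
<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
  <system.web>
    <httpHandlers>
      <!-- Path is now relative to the "files" virtual directory -->
      <add verb="GET" path="generated.aspx" type="WebApp.MyHttpHandler" />
    </httpHandlers>
  </system.web>
</configuration>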
Once all of the above steps are completed, the generated file should appear correctly and everything should be golden. If not, I strongly recommend replacing the real content of the custom HttpHandler with code that simply writes a string to the response and exits. This way, any internal errors in the HttpHandler will not be confused with issues of whether or not the handler is being called at all.
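A minimal diagnostic stub might look like the following (a sketch; the class name matches the web.config above):

using System.Web;

namespace WebApp
{
    public class MyHttpHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            // Diagnostic body: proves the handler is being invoked
            context.Response.ContentType = "text/plain";
            context.Response.Write("MyHttpHandler was invoked.");
        }
    }
}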
Thursday, July 8. 2010
Microsoft Sync Framework (v2) Not Thread Friendly
As a quick note for other developers that may be getting the same (difficult to understand) error, the Microsoft Sync Framework version 2 is not as thread-friendly as one might expect. The API documentation makes it clear that class instances in the framework are not thread-safe; however, this thread-unsafety goes further than that. Even when an instance is protected by proper locking to prevent concurrent access, it may still produce errors when accessed from multiple threads. For example, if a SyncOrchestrator and 2 FileSyncProviders are initialized on one thread and (later) Synchronize is called from another thread, the following exception will be thrown:
System.InvalidCastException: Specified cast is not valid.
at Microsoft.Synchronization.CoreInterop.SyncServicesClass.CreateSyncSession(ISyncProvider pDestinationProvider, ISyncProvider pSourceProvider)
at Microsoft.Synchronization.KnowledgeSyncOrchestrator.DoOneWaySyncHelper(SyncIdFormatGroup sourceIdFormats, SyncIdFormatGroup destinationIdFormats, KnowledgeSyncProviderConfiguration destinationConfiguration, SyncCallbacks DestinationCallbacks, ISyncProvider sourceProxy, ISyncProvider destinationProxy, ChangeDataAdapter callbackChangeDataAdapter, SyncDataConverter conflictDataConverter, Int32& changesApplied, Int32& changesFailed)
at Microsoft.Synchronization.KnowledgeSyncOrchestrator.DoOneWayKnowledgeSync(SyncDataConverter sourceConverter, SyncDataConverter destinationConverter, SyncProvider sourceProvider, SyncProvider destinationProvider, Int32& changesApplied, Int32& changesFailed)
at Microsoft.Synchronization.KnowledgeSyncOrchestrator.Synchronize()
at Microsoft.Synchronization.SyncOrchestrator.Synchronize()
at DigitalEngine.SyncMgrFileSync.SyncItem.Synchronize() in File.cs:line num
To work around such errors, make sure all sync instances are created and accessed on a single thread.
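For example, a minimal sketch of the safe pattern (the paths are placeholders):

using Microsoft.Synchronization;
using Microsoft.Synchronization.Files;

class SingleThreadSyncExample
{
    static void RunSync()
    {
        // Create the providers and the orchestrator on the same thread
        // that calls Synchronize(); do not hand them across threads.
        FileSyncProvider source = new FileSyncProvider(@"C:\source");
        FileSyncProvider dest = new FileSyncProvider(@"C:\dest");
        SyncOrchestrator orchestrator = new SyncOrchestrator();
        orchestrator.LocalProvider = source;
        orchestrator.RemoteProvider = dest;
        orchestrator.Synchronize();
    }
}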
Friday, June 25. 2010
SQL Server Missing from Synchronization Manager
Symptoms
After (re-)creating a subscription to a pull merge replication publication from SQL Server Express 2005, the subscription fails to appear in Synchronization Manager. After further investigation, the symptom was determined to be restricted to non-Administrators.
Things To Check
- Make sure sp_addmergepullsubscription_agent was run with @enabled_for_syncmgr = 'TRUE' (see the sketch after this list). This requirement differs from previous SQL Server versions, where this was the default. When this parameter is not set to 'TRUE', the subscription will not appear in Synchronization Manager.
- Make sure the subscription can be synchronized outside of Synchronization Manager, to confirm that the problem only occurs when run through replsync.dll in Synchronization Manager. The easiest way to do this is using replmerg from the command-line.
- Make sure the user has permission to write to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\80\Replication\Subscriptions. Without write permission, SQL Server will silently fail to appear in Synchronization Manager. By default in many configurations, non-Administrators do not have write access to this key, so it must be adjusted manually.
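As a sketch of the first item (the server, database, and publication names are placeholders, and the procedure's other required parameters are omitted):

EXEC sp_addmergepullsubscription_agent
    @publisher = N'PUBSERVER',
    @publisher_db = N'PublisherDB',
    @publication = N'MyPublication',
    @enabled_for_syncmgr = 'TRUE';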
The last item is particularly important and required quite a bit of my time to determine... which resulted in the need for this post. Hopefully one of the above suggestions will help you avoid spending the same amount of time that I did to solve this problem.
Sunday, June 20. 2010
Triggers, NOT FOR REPLICATION Identity, and Merge Replication
Initial Note/Warning
I hope that there is a more elegant, simple, and robust method for dealing with the issues described in this post than the one that I present herein. My method is a rather ugly kludge that relies on undocumented features of SQL Server and is not nearly as parallelizable as I would like. If you are aware of a better method (which I hope exists), please don't hesitate to post it in the comments. Until a better method is posted, I invite you to use and/or learn from the method presented in this post.
Background
Consider an SQL Server 2005 database which is published for merge replication from a single publisher with multiple subscribers. The database contains 2 tables which we are considering: Forms and InboxForms. Forms contains some sort of form data and InboxForms references all forms which are present in a user's "inbox". Each of the tables contains an INT IDENTITY primary key column and several other columns of data that are not relevant to the problem at hand. The publication is filtered based on the Forms that each user is able to view (determined by some separate mechanism not described here). When a new row is inserted into Forms, a trigger is used to route the forms into the appropriate inbox(es).
The Problem
The routing can not occur (exclusively) at the subscribers, because the filter for the subscriber will not necessarily include user information for the recipient of the form, and the form can not be placed into InboxForms if its recipient user does not exist on the subscriber. So, the trigger must run on the publisher during synchronization when inserts are performed by the merge agent (i.e. the trigger must not be marked NOT FOR REPLICATION). However, in this configuration, when the merge agent runs, the following error message is produced:
Explicit value must be specified for identity column in table 'InboxForms' either when IDENTITY_INSERT is set to ON or when a replication user is inserting into a NOT FOR REPLICATION identity column.
The problem is that the INT IDENTITY column in InboxForms was marked NOT FOR REPLICATION when the publication was created, in order to facilitate automatic identity range management, which is described in the Replicating Identity Columns article in the SQL Server Books Online. NOT FOR REPLICATION behaves very similarly to IDENTITY_INSERT (as hinted at in the error message), such that when a row is inserted by the merge agent, the identity seed value is not incremented and the value of the identity column must be explicitly specified. Note, however, that it is not the same mechanism as IDENTITY_INSERT, so changing IDENTITY_INSERT in the trigger will not remove the requirement for explicitly specified identity values.
The Solution
One method to solve this problem is to disable NOT FOR REPLICATION, as suggested in KB908711 (which specifically addresses this issue). However, using this option will interfere with automatic identity range management, since the identity values generated on the subscribers can not be copied to the publisher, and other steps will need to be taken to manually manage identity values. For me, this was an unacceptably high price to pay and another solution was required.
A solution which does not interfere with automatic identity range management is to calculate values for the identity columns and explicitly specify those values when they are required. Accomplishing this requires understanding several features of T-SQL: In order to determine when the values are required, the trigger needs to test if it is being run from the merge replication agent. This can be done by testing for the 'replication_agent' SESSIONPROPERTY. In order to determine appropriate values for the identity column, use IDENT_CURRENT and IDENT_INCR. Note that using the maximum value for the identity column is not necessarily correct because the maximum identity range will not necessarily be allocated to the publisher. DBCC CHECKIDENT can be used to update the identity seed value (which is not affected by explicitly inserted identity values).
One other complicating factor in our implementation is that there is no way to atomically insert explicit identity values and update the identity seed value. Therefore, locking is required to prevent multiple connections from simultaneously updating the values and causing collisions (or collision errors must be caught and retried). In the following implementation, an exclusive table lock is acquired which prevents any inserts from occurring on the table while the trigger is running. This is a serious performance problem, as it prevents any other operations on the locked table from completing while the trigger is executing. Keep this in mind when designing the queries that will run while the lock is held.
Now, without further ado, here's the trigger:
ALTER TRIGGER TR_RouteForms
ON dbo.Forms
AFTER INSERT
AS
BEGIN
    -- Client isn't expecting routing counts from their insert
    SET NOCOUNT ON;

    IF SESSIONPROPERTY('replication_agent') <> 0
    BEGIN
        -- Running from the replication agent
        -- Need explicit value for NOT FOR REPLICATION IDENTITY columns
        -- Use transaction to limit lock scope
        BEGIN TRAN

        -- Variables for IDENT_CURRENT and IDENT_INCR, required
        -- because DBCC CHECKIDENT syntax won't support nested parens
        DECLARE @Ident INT, @Incr INT;
        -- RowCnt used to preserve @@ROWCOUNT
        DECLARE @RowCnt INT;

        -- Must acquire exclusive lock on InboxForms to prevent other
        -- inserts (which would invalidate the identity and cause
        -- collisions in the identity column).
        -- Select into variable to prevent resultset going to client
        -- WHERE clause quickly evaluated, returns small (empty) result
        DECLARE @Dummy INT;
        SELECT @Dummy = InboxFormID
        FROM InboxForms WITH (TABLOCK, XLOCK, HOLDLOCK)
        WHERE InboxFormID = 0;

        -- Perform the form routing (inserts into InboxForms)
        SET @Ident = IDENT_CURRENT('InboxForms');
        SET @Incr = IDENT_INCR('InboxForms');
        INSERT INTO InboxForms (InboxFormID, FormID, ...)
        SELECT @Ident + @Incr * ROW_NUMBER() OVER (ORDER BY FormID) AS InboxFormID, FormID, ...
        FROM inserted
        WHERE ...routing criteria...
        SET @RowCnt = @@ROWCOUNT;

        IF @RowCnt > 0
        BEGIN
            -- At least 1 form was routed, update the identity seed value
            -- Note: Can't use MAX(InboxFormID) since publisher may not
            -- have been allocated the maximum identity range
            SET @Ident = @Ident + @Incr * @RowCnt;
            DBCC CHECKIDENT (InboxForms, RESEED, @Ident)
                WITH NO_INFOMSGS;
        END

        COMMIT TRAN
    END
    ELSE
    BEGIN
        -- NOT running from the replication agent
        -- Can insert normally into NOT FOR REPLICATION IDENTITY columns,
        -- so the identity column is omitted and generated automatically
        -- Perform the form routing (inserts into InboxForms)
        INSERT INTO InboxForms (FormID, ...)
        SELECT FormID, ...
        FROM inserted
        WHERE ...routing criteria...
    END
END
Friday, June 18. 2010
LAN Party version 1.1
Thursday, June 10. 2010
Non-strongly typed attributes in ASP.NET
Saturday, June 5. 2010
LAN for Kids: The Aftermath
Well, I just realized that it has been nearly two weeks since LAN for Kids and I hadn’t done a follow-up. I am just recovering now, so I figured this might be a good time. LAN for Kids was insanely fun! I had such a great time putting it together; thank you to everyone who came out. I would also like to thank our sponsors once again: The Tech Ranch for sponsoring the venue, Michael Clark from Danix who let us use a whole bunch of his networking equipment, Scott Lease from Pepsi who donated a ton of pop, Brandon VanCleeve from Pine Cove Consulting who let us use a couple of their switches, Colter Lease from Propaganda Works who donated a sweet John Belushi print, Stacey Alzheimer from Theraputika who donated an hour-long massage, Richard Stallman/The Free Software Foundation for signing the comic, and HeadRoom for the use of their headphones. Go buy their stuff now.
Because you can’t expect to put on a large event without running into problems, and because it’s sometimes funny to hear about those problems afterwards, especially if you weren’t responsible for fixing them, I will mention our minor freak outs briefly. First we woke up to snow. Dirty, evil snow. Fortunately logistics problems had ruled out our plan for an outdoor LAN (hahaha) and we all knew how to man up so the snow didn’t slow us down. Take that Mother Nature--oh snap! We also didn’t have the internets. Well, we had wireless internet but none of the wired jacks worked. Apparently one of the internet trucks broke down in the internet tubes. We thought Steam games might be a lost cause because even though we were hosting our own servers we still had to make an initial login to Steam to host the game. Then, out of nowhere, Kevin swooped in like a ninja and bridged his laptop’s wireless connection, plugged it into the router and BAM, internet for all! Kevin also set up some sweet traffic shaping so our network performed like a champion and a boss. Kevin saved the day, many thanks Kevin! Also big thanks to Ian Nicklin who helped with setup and game hosting on his personal super computer and Peter Nix who also helped with hosting and setup, you guys rule!
The gaming rocked! Games from 20 years ago are the best. We played some huge Quake 3 games, a StarCraft 1 game, a little Left4Dead and finished off the night with Team Fortress 2. Some guys brought an Xbox with Street Fighter and an arcade control pad and played it on the projector. It was RAD. I bought WAY too much pizza. On the up side it meant everyone got (was forced) to take some pizza home at the end of the night. The event ended just before midnight. Unfortunately during all the craziness I completely forgot to bring my camera, so no pictures this time, but trust me, the setup was beautiful and epic and beautifully epic.
Raffle Winners:
XKCD comic signed by Richard Stallman – Chris Webster
John Belushi Print – Carson Welch
1 Hour Massage – Erin Snyder
Wii Guitar Hero Controller – Ian Nicklin’s boss
Overall I/we had an amazing time and I can’t wait to run another. In fact I will be trying to put together some additional equipment so we can have a complete LAN kit in house and we could potentially host smaller events once every month or two. I did learn a couple of things and I think future events will be much cheaper ($5 or possibly free) and we will sell the food and drink. We also need to find some cheaper space but I’m pretty sure we can work that out. Thanks again to everyone that donated, helped, came or just generally supported us, we really appreciated it!
Boring logistical stuff:
The tough part in hosting LAN parties is that there are actually some non-trivial costs for each additional person that attends. Each person needs a decent amount of table, a chair, three or so outlets, a port on a preferably gigabit switch, CAT 5 to the switch, long CAT 5 from each table switch to a central switch which needs to be a fairly large gigabit switch. Fortunately each piece of that equation is a one-time cost. Unfortunately renting a large space can be relatively expensive compared to the number of people the space can host times the amount of money we can reasonably charge them to get in, especially if we want some of that money to go to a charity. If I can get the equipment we will be able to host approximately 50-60 people. Fifty or sixty people are all we could reasonably fit into the Homewood Suites ballroom. I went to nearly every big rental space in Bozeman before we decided on Homewood Suites and the general consensus was $300-$400 for 6 hours to a full day worth of the meeting space. The Homewood Suites was really nice and gave us a discount price of $200 for the whole day which made them SUBSTANTIALLY cheaper than the competition. Even still, assuming we didn’t have a donation to cover the room we would need 40 people at $5 each before we broke even on the space. If we were to rent out the SUB Ballroom, which would be ideal, we couldn’t even cover the cost of the room at $5 per person. However, like I said, we may be able to get some smaller spaces for free in order to host intermediate LANs and then we can maybe splurge once a year for a big meeting space and go crazy.
Wednesday, May 12. 2010
Proper Error Page Handling in ASP.NET
ASP.NET provides a convenient mechanism for configuring error pages through the customErrors element of web.config, which allows developers to select pages to be displayed based on the error code the server would have generated. However, this mechanism has some serious drawbacks. Most importantly, the error code is no longer sent to the browser! For example, when a custom error page is used and a 500 error occurs, instead of sending HTTP status code 500 to the browser, ASP.NET will send a 302 redirect to the browser (and a 200 on the error page, assuming it does not throw an error itself). In my opinion, this is completely wrong and misleading. Although the users see an error page, the software (unless it has custom logic to detect the error page by URL or by title/keyword search) is told that everything is working as expected and that the page has temporarily moved. It's poor practice for search optimization (although I'm sure major search providers have long ago written logic to recognize this sort of misinformation) and it's poor practice for any automated consumers of the site.
To fix this, I highly recommend writing some custom error handling logic. There is lots of useful information to get started in the Rich Custom Error Handling with ASP.NET article on MSDN (as long as the suggestions to use Response.Redirect are ignored). I recommend that the fundamental component of the error handling be something like the following (in Global.asax):
void Application_Error(object sender, EventArgs e)
{
// Insert any logging or special handling for specific errors here...
Response.StatusCode = (int)System.Net.HttpStatusCode.InternalServerError;
Server.Transfer("~/Errors/ServerError.aspx");
}
Using the above code, the server will respond with HTTP status code 500 and a useful error page to tell users what happened and what they can do about it. It will also not mess with their browser URL so that they can easily retry the page and/or explain what happened and where to a tech.
Notes:
Note 1: Make sure the error page is larger than 512 bytes, otherwise IE and Chrome will not show it in their default settings.
Note 2: The HTTP 1.1 status codes and their meanings are described in Section 10 of RFC2616.
Tuesday, May 11. 2010
Making Custom Replication Resolvers Work in SQL Server 2005
Background
SQL Server provides a very convenient method for implementing custom business logic in coordination with the synchronization/merge process of replication. For tasks which need to be done as data is synchronized, or business-specific decisions about resolving conflicts, implementing a custom resolver is a surprisingly straightforward way to go. For more information, check out the following resources:
- How to: Implement a Business Logic Handler for a Merge Article (Replication Programming)
- How to: Implement a COM-Based Custom Conflict Resolver for a Merge Article (Replication Programming)
- BusinessLogicModule Class
Making It Work
Things are never quite as easy as they seem... Chances are, some sort of error message was generated once the DLL was deployed and the instructions in (1) were completed. For example, the following error is common:
Don't Panic. First, check that the DLL has been placed in the directory containing the merge agent on the subscriber (assuming this is being done with pull; place the DLL on the server for push) or registered in the GAC. This message can also indicate dependency problems for the DLL, where dependent libraries can't be found/loaded. One way to test that the assembly can be loaded is to compile and run the following program in the same directory as the merge agent:
using System;
using System.Reflection;

class Program
{
    static void Main(string[] args)
    {
        TryLoadType(ASSEMBLY_NAME, CLASS_NAME);
        // Leave window visible for non-CLI users
        Console.ReadKey();
    }

    static void TryLoadType(string assemblyname, string typename)
    {
        try
        {
            Assembly asm = Assembly.Load(assemblyname);
            if (asm == null)
            {
                Console.WriteLine("Failed to load assembly");
                return;
            }
            Type type = asm.GetType(typename);
            if (type == null)
            {
                Console.WriteLine("Failed to load type");
                return;
            }
            ConstructorInfo constr = type.GetConstructor(new Type[0]);
            if (constr == null)
            {
                Console.WriteLine("Failed to find 0-argument constructor");
                return;
            }
            object instance = constr.Invoke(new object[0]);
            Console.WriteLine("Successfully loaded " + type.Name);
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine("Error loading type: " + ex.Message);
        }
    }
}
Note: It is very important to correctly determine where the merge agent executable is and where it is being run from when testing. The DLL search path includes both the directory in which the executable file exists and the directory from which it is run (for weak-named assemblies). replmerg.exe usually lives in C:\Program Files\Microsoft SQL Server\90\COM, but mobsync.exe (if you are using Synchronization Manager or Sync Center) is in C:\WINDOWS\system32, and this will have an effect on the assembly search path.
Make sure the names are exactly as they were specified in sp_registercustomresolver. If the problem was a misnamed assembly or class (because you are like me and fat-fingered the name...), here's how you fix it: sp_registercustomresolver can be re-run with the same @article_resolver parameter to overwrite the information for that resolver. This overwrites the information stored in the registry at HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\90\Replication\ArticleResolver (or a similar location for different versions/configurations). However, if the resolver has already been attached to an article, the information is also stored in sysmergearticles in the article_resolver (assembly name), resolver_clsid (CLSID), and resolver_info (.NET class name) columns. So, run an UPDATE on these columns to fix errors, as appropriate.
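As a sketch (the article name, assembly name, CLSID, and class name are placeholders for your own values):

UPDATE sysmergearticles
SET article_resolver = 'MyResolverAssembly',
    resolver_clsid = '{00000000-0000-0000-0000-000000000000}',
    resolver_info = 'MyNamespace.MyResolverClass'
WHERE name = 'MyArticle';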
Good Luck!