Tuesday, November 16. 2010
Failed to map the path '/' After Installing IIS
Over the past weekend I installed SQL Server 2005 Express Reporting Services according to the directions in KB934164 on my Windows 7 machine (which did not go particularly smoothly due to problems with the default Application Pool Identity that caused some Reporting Services Configuration steps to fail... but that is another story). Returning to ASP.NET development in VS2010, I found that most pages of the site I am working on would no longer load in the ASP.NET Development Server for debugging. The failing pages produced the following error message:
The error could easily be reproduced by calling System.Web.Configuration.WebConfigurationManager.OpenWebConfiguration(HttpContext.Current.Request.ApplicationPath).
After a bit of searching, I found several discussions of the problem (or very similar problems) and all of the posts that contained a solution involved running Visual Studio as Administrator (e.g. Gabe Sumner on Sitefinity or Miha Markič on Righthand Blogs). I found this solution unacceptable (for a number of reasons) and decided to dig deeper.
After much tracing and browsing around in .NET Reflector, I determined that System.Web.Configuration.ProcessHostConfigUtils makes calls to several external functions in System.Web.Hosting.UnsafeIISMethods which were failing. Although these calls failed, the code in ProcessHostConfigUtils ignores the failures up until it throws the nearly useless message quoted above (rather than using the failure message from the UnsafeIISMethods methods). Using reflection, I invoked the external methods directly, then converted the HRESULT to an exception using System.Runtime.InteropServices.Marshal.GetExceptionForHR and received the following error message:
Finally! Something useful. So I granted my user account read permission on C:\Windows\System32\inetsrv\config and the problem was solved (note that I don't have any sensitive information in this directory which would need to remain private, so I don't have a problem with extra read-only access).
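For the curious, the useful message extracted this way comes straight from the HRESULT bit fields that Marshal.GetExceptionForHR interprets. As a rough sketch (in Python, since the bit layout is defined by the Windows SDK and is the same everywhere; the example value is illustrative, not the exact HRESULT from my session), a standard HRESULT decomposes like this:

```python
def decode_hresult(hr):
    """Split an HRESULT into its standard bit fields.

    Per the Windows SDK layout: bit 31 is the severity (1 = failure),
    bits 16-26 are the facility, and bits 0-15 are the error code.
    """
    hr &= 0xFFFFFFFF  # normalize signed 32-bit values
    return {
        "failed": bool(hr >> 31),
        "facility": (hr >> 16) & 0x7FF,
        "code": hr & 0xFFFF,
    }

# Example: 0x80070005 wraps Win32 error 5 (ERROR_ACCESS_DENIED) in
# FACILITY_WIN32 (7) -- the kind of failure that missing read access
# to inetsrv\config would surface as.
print(decode_hresult(0x80070005))  # {'failed': True, 'facility': 7, 'code': 5}
```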
It seems very odd that this file would be loaded at all, given that IIS is not configured with the "shared configuration" feature that uses redirection.config (in fact it has the default configuration in every way). It also seems strange that the code would need to consult the global IIS configuration, although this could be due to any number of reasons (legitimate or due to code which makes incorrect assumptions about IIS and the ASP.NET Development Server co-existing on a machine). However, since all of the code necessary to reproduce the problem is outside of my control, there isn't much more I can do. Enjoy the workaround.
Friday, November 12. 2010
Access Data Pages in Access 2002/2003 on Windows Vista/7
If you have tried creating a blank Access Data Page in Access 2003 (or Access 2002, aka Access XP) on Windows Vista or 7, you will have seen the following message:
Wednesday, November 10. 2010
Access 2003 Not Compatible with Access Runtime 2010
Just a quick warning: Don't install Access Runtime 2010 on any computer where you are still using Access 2003 (this may apply to other version combinations as well). I recently installed the latest version of SQL Server Migration Assistant for Access, which requires the Microsoft Access 2010 Runtime. After installing the runtime, creating an event procedure would cause Access 2003 to crash. With a bit more testing, I found that the Visual Basic Editor was automatically creating a reference to the "Microsoft Access 14.0 Object Library" (that came with the 2010 Runtime) instead of the "Microsoft Access 11.0 Object Library" (that came with Access 2003) and it would not allow me to change this library. After removing the 2010 runtime, all is back to normal.
Sunday, October 17. 2010
CSS3 Flexible Boxes Don't Always Work
While working on a web page for a web-based intranet application, I ran into the problem of vertically sizing an HTML element such that it fills the remaining vertical space within its parent (given an unknown amount of space already filled by its preceding siblings). Although this may sound like a simple problem, I have yet to find a CSS-only solution. There is a very useful (although now a bit dated) discussion of the problem on Patrick van Bergen's Blog, although the Internet Explorer portion of his solution can not be used in IE8 Strict Mode due to the removal of dynamic properties. However, this post led me to discover the CSS3 Flexible Box Layout Module.
The purpose of this post is not to describe the flexible box layout module or how to use it; for that, there are many excellent resources, such as The CSS 3 Flexible Box Model on Mozilla Hacks and Introducing the Flexible Box Layout module on CSS3.info. The point of this post is to highlight one important caveat of flexible boxes in Firefox: they don't work when positioned absolutely. This fact isn't mentioned in the documentation that I have come across (or, if it is, I have managed to overlook it multiple times), and I am unsure whether it is a bug or intended behavior. In either case, it can be very confusing for developers who are new to flexible boxes and unsure what to expect. For a quick example of the problem, consider the following code:
<div style="border: 2px solid green; display: -moz-box; height: 5em; -moz-box-orient: vertical">
<div style="background-color: red">Child 1</div>
<div style="background-color: blue; -moz-box-flex: 1">Child 2</div>
</div>
This produces the following output (best viewed in a Gecko-based browser):
Yet if we make the container absolutely positioned (and place it in a relatively positioned container so it appears below this text), we get the following:
Keep this in mind if you are working with flexible boxes.
Friday, August 13. 2010
Passing pipes to subprocesses in Python in Windows
Passing pipes around in Windows is a bit more complicated than it is in Unix-like operating systems. This post provides some information about how file descriptor inheritance works in Windows and a quick example of passing a pipe to a subprocess.
The first difficulty to overcome is that file descriptors are not inherited by subprocesses in Windows as they are in Linux. However, OS file handles can be inheritable, and it is possible to retrieve the OS file handle associated with a C file descriptor using the _get_osfhandle function. It is also possible to convert the OS file handle back to a C file descriptor in the child process using _open_osfhandle. However, the OS file handle is not inheritable (see Python Bug 4708 - although on a side note, I don't think they should be default-inheritable since there is no preexec_fn in which to close them and prevent deadlock situations where a child holds the write end of a pipe open, but that is another matter), so to get an inheritable OS file handle it must be duplicated using DuplicateHandle.
The operating system-specific functions described above can be accessed through the _subprocess and msvcrt modules. Their use can be seen in the source for the subprocess module, if desired. However, note that the interface in _subprocess is not stable and should not be depended on, but since there is no alternative (that I am aware of - short of writing another module in C) it is the best solution. Now, an example:
parent.py:
import os
import subprocess
import sys

if sys.platform == "win32":
    import msvcrt
    import _subprocess
else:
    import fcntl

# Create pipe for communication
pipeout, pipein = os.pipe()

# Prepare to pass to child process
if sys.platform == "win32":
    curproc = _subprocess.GetCurrentProcess()
    pipeouth = msvcrt.get_osfhandle(pipeout)
    pipeoutih = _subprocess.DuplicateHandle(curproc, pipeouth, curproc, 0, 1,
                                            _subprocess.DUPLICATE_SAME_ACCESS)
    pipearg = str(int(pipeoutih))
else:
    pipearg = str(pipeout)
    # Must close pipe input if child will block waiting for end
    # Can also be closed in a preexec_fn passed to subprocess.Popen
    fcntl.fcntl(pipein, fcntl.F_SETFD, fcntl.FD_CLOEXEC)

# Start child with argument indicating which FD/FH to read from
subproc = subprocess.Popen(['python', 'child.py', pipearg], close_fds=False)

# Close read end of pipe in parent
os.close(pipeout)
if sys.platform == "win32":
    pipeoutih.Close()

# Write to child (could be done with os.write, without os.fdopen)
pipefh = os.fdopen(pipein, 'w')
pipefh.write("Hello from parent.")
pipefh.close()

# Wait for the child to finish
subproc.wait()
child.py:
import os
import sys

if sys.platform == "win32":
    import msvcrt

# Get file descriptor from argument
pipearg = int(sys.argv[1])

if sys.platform == "win32":
    # Convert the inherited OS file handle back to a C file descriptor
    pipeoutfd = msvcrt.open_osfhandle(pipearg, os.O_RDONLY)
else:
    pipeoutfd = pipearg

# Read from pipe
# Note: Could be done with os.read/os.close directly, instead of os.fdopen
pipeout = os.fdopen(pipeoutfd, 'r')
print pipeout.read()
pipeout.close()
Note: For this example to work, python must be on the executable search path (i.e. in the %PATH% environment variable). If it is not, change the subprocess invocation to include the full path to python.exe. Note also that there is some complication with OS file handle leakage. When a _subprocess handle object is garbage collected, the handle that it holds is closed, unless it has been detached. Therefore, if the subprocess will outlive the scope in which the handle variable is defined, the handle must be detached (by calling its Detach method) to prevent closing during garbage collection. But this solution has its own problems, as there is no way (that I am aware of) to close the handle from the child process, which means a file handle will be leaked.
Above caveats aside, this method does work for passing pipes between parent and child processes on Windows. I hope you find it useful.
Monday, August 9. 2010
BAD_POOL_HEADER in VirtualBox
Thursday, August 5. 2010
Prevent Web Pages from Stealing Focus
I have had enough of web pages arbitrarily changing the cursor focus. Enough of typing half of a URL into the URL bar and half into the text box that stole focus. Enough of typing half of a password into a password box and half into the text box that stole focus. Enough! If you have had enough, check out the NoFocus Greasemonkey script. This script prevents pages from changing focus during page load and, optionally, for the entire life of the page.
This is not a new problem. It has been discussed before on superuser and elsewhere. There are also alternative solutions, such as using Firefox's Configurable Security Policies to disallow focus, or using another Greasemonkey script such as NoFocusForYou and Disable Focus on Window Load. However, NoFocus has several advantages:
- No exceptions are thrown, as with CSPs. Most page authors are not expecting focus to throw exceptions (rightly so), and when this happens it will often completely break the pages by preventing the remainder of the script from running.
- Disables focus immediately (well, in the DOMContentLoaded event, when Greasemonkey runs its scripts). Other scripts only disable focus changing once the load event has fired, which doesn't address focus changes before this event occurs.
- Disables focus changes for all elements currently on the page and all dynamically created elements. Rather than disabling focus for the elements on the page during load or DOMContentLoaded, this script disables focus for all elements current and future.
- Provides warnings when focus change is prevented. Inevitably, when debugging a page sometime in the future, we will forget that focus changing is prevented (if only briefly) and will fail to understand why a page is not working as expected. NoFocus counteracts this problem by using Greasemonkey logging to write a warning message into the error log whenever a focus change has been prevented, making it easier to determine why focus is not performing as expected.
I hope you enjoy the script and enjoy not losing focus!
Sunday, July 11. 2010
Clone Only HEAD Using Git-SVN
A quick tip for git-svn users: When checking out an SVN repository where only the HEAD revision is desired, the following snippet may be useful:
git svn clone -$(svn log -q --limit 1 $SVN_URL | awk '/^r/{print $1}') $SVN_URL
The above snippet determines the most recent SVN revision number (using svn log) and passes it to git-svn-clone. This can also be added to git as an alias by adding the following to .gitconfig:
[alias]
svn-clone-head = "!f() { git svn clone -`svn log -q --limit 1 $1 | awk '/^r/{print $1}'` $1 $2; }; f"
For checking out the last $N commits, a similar convention can be used:
git svn clone -$(svn log -q --limit $N $SVN_URL | awk '/^r/{rev=$1};END{print rev}') $SVN_URL
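For readers who prefer not to decode the awk, the extraction logic can be sketched in Python (assuming the standard `svn log -q` output format; the sample text below is fabricated for illustration):

```python
def head_revision(svn_log_q_output):
    """Return the first revision token (e.g. 'r1234') from `svn log -q` output."""
    # `svn log -q` prints separator lines of dashes and one line per
    # revision of the form 'r1234 | author | date'
    for line in svn_log_q_output.splitlines():
        token = line.split(" ", 1)[0]
        if token.startswith("r") and token[1:].isdigit():
            return token
    return None

sample = (
    "------------------------------------------------------------------------\n"
    "r1234 | someuser | 2010-07-11 12:00:00 -0500 (Sun, 11 Jul 2010)\n"
    "------------------------------------------------------------------------\n"
)
print(head_revision(sample))  # r1234
```

Prefixing the returned token with "-" yields the "-r1234" argument passed to git svn clone, matching the awk one-liner above.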
Friday, July 9. 2010
HttpHandlers in Virtual Directories on IIS6
Background
I recently encountered an interesting problem related to how Virtual Directories interact with web.config when dealing with an HttpHandler. The website was virtually rooted at "/webapp", physically rooted at "C:\inetpub\wwwroot\webapp". Inside the application, I wanted "files" ("/webapp/files") to be physically rooted on another drive at "E:\files" so that large data files could be stored on a more appropriate drive. Furthermore, I wanted a custom HttpHandler which would generate a few files inside "files" on the fly. Initially, this was set up by creating a virtual directory named "files" which pointed to "E:\files", placing the HttpHandler in App_Code, and adding the following to web.config:
<?xml version="1.0"?>
<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
<system.web>
<httpHandlers>
<add verb="GET" path="files/generated.txt" type="WebApp.MyHttpHandler" />
</httpHandlers>
</system.web>
</configuration>
The Problem
As I quickly found out, this doesn't work (for several reasons, as we will see). First, requests for .txt files are not handled by aspnet_isapi.dll (by default), so whatever is done in the ASP.NET code is irrelevant because IIS will not call ASP.NET to handle the request. To fix this problem, the .txt extension can be added to the list of extensions handled by aspnet_isapi.dll (which will cause extra overhead, as each request is run through the ASP.NET ISAPI handler even when the file exists on disk), or the extension of the generated file can be changed to something already mapped to aspnet_isapi.dll (like .aspx).
Next, unless the content of "files" is going to be substantially different from the rest of the application, the "files" Virtual Directory must not be a Virtual Application. If the "files" mapping is really a Virtual Application, it will not share code with the parent application so the HttpHandler class will not be found.
Finally, due to how the ASP.NET Configuration File Hierarchy and Inheritance works, web.config will be (essentially) re-applied in the virtual directory, so "files/generated.aspx" will be "files/files/generated.aspx" when considered from inside of the "files" virtual directory. To fix this (without also creating a "/generated.aspx" alias), remove the httpHandlers section from the global web.config and create a web.config inside of the physical directory for "files" with path="generated.aspx".
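Putting that last fix together, the child web.config dropped into E:\files might look something like this (a sketch based on the handler registration shown earlier; note that the path is now relative to the virtual directory):

```xml
<?xml version="1.0"?>
<configuration>
  <system.web>
    <httpHandlers>
      <add verb="GET" path="generated.aspx" type="WebApp.MyHttpHandler" />
    </httpHandlers>
  </system.web>
</configuration>
```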
Once all of the above steps are completed, the generated file should appear correctly and everything should be golden. If not, I strongly recommend replacing the real content of the custom HttpHandler with code that simply writes a string to the response and exits. This way any internal errors in the HttpHandler will not confound any issues with whether or not the handler is being called.
Thursday, July 8. 2010
Microsoft Sync Framework (v2) Not Thread Friendly
As a quick note for other developers who may be getting the same (difficult to understand) error: the Microsoft Sync Framework version 2 is not as thread-friendly as one might expect. The API documentation makes it clear that class instances in the framework are not thread-safe; however, this thread-unsafety goes further. Even when an instance is protected by proper locking to prevent concurrent access, it may still fail when accessed from multiple threads. For example, if a SyncOrchestrator and 2 FileSyncProviders are initialized on one thread and (later) Synchronize is called from another thread, the following exception will be thrown:
System.InvalidCastException: Specified cast is not valid.
at Microsoft.Synchronization.CoreInterop.SyncServicesClass.CreateSyncSession(ISyncProvider pDestinationProvider, ISyncProvider pSourceProvider)
at Microsoft.Synchronization.KnowledgeSyncOrchestrator.DoOneWaySyncHelper(SyncIdFormatGroup sourceIdFormats, SyncIdFormatGroup destinationIdFormats, KnowledgeSyncProviderConfiguration destinationConfiguration, SyncCallbacks DestinationCallbacks, ISyncProvider sourceProxy, ISyncProvider destinationProxy, ChangeDataAdapter callbackChangeDataAdapter, SyncDataConverter conflictDataConverter, Int32& changesApplied, Int32& changesFailed)
at Microsoft.Synchronization.KnowledgeSyncOrchestrator.DoOneWayKnowledgeSync(SyncDataConverter sourceConverter, SyncDataConverter destinationConverter, SyncProvider sourceProvider, SyncProvider destinationProvider, Int32& changesApplied, Int32& changesFailed)
at Microsoft.Synchronization.KnowledgeSyncOrchestrator.Synchronize()
at Microsoft.Synchronization.SyncOrchestrator.Synchronize()
at DigitalEngine.SyncMgrFileSync.SyncItem.Synchronize() in File.cs:line num
To work around such errors, make sure all sync instances are only accessed from a single thread.
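The workaround pattern, creating the objects and calling Synchronize from one dedicated thread, is not specific to .NET. As an illustrative sketch (in Python, with names of my own choosing, since the post's stack is .NET), a minimal single-thread executor that keeps thread-affine objects on one worker thread looks like:

```python
import queue
import threading

class SingleThreadExecutor:
    """Run every submitted callable on one dedicated thread, so that
    thread-affine objects are only ever touched from that thread."""

    def __init__(self):
        self._tasks = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            func, result = self._tasks.get()
            if func is None:  # shutdown sentinel
                break
            try:
                result["value"] = func()
            except Exception as exc:  # propagate to the submitting thread
                result["error"] = exc
            finally:
                result["done"].set()

    def submit(self, func):
        """Run func on the worker thread and block until it finishes."""
        result = {"done": threading.Event()}
        self._tasks.put((func, result))
        result["done"].wait()
        if "error" in result:
            raise result["error"]
        return result["value"]

    def shutdown(self):
        self._tasks.put((None, None))
        self._thread.join()
```

With the Sync Framework, the equivalent would be constructing the SyncOrchestrator and both FileSyncProviders inside submitted callables, and calling Synchronize the same way.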
Friday, June 25. 2010
SQL Server Missing from Synchronization Manager
Symptoms
After (re-)creating a subscription to a pull merge replication publication from SQL Server Express 2005, the subscription fails to appear in Synchronization Manager. After further investigation, the symptom was determined to be restricted to non-Administrators.
Things To Check
- Make sure sp_addmergepullsubscription_agent was run with @enabled_for_syncmgr = 'TRUE'. This requirement differs from previous SQL Server versions, where this was the default. When this parameter is not set to 'TRUE', the subscription will not appear in Synchronization Manager.
- Make sure the subscription can be synchronized outside of Synchronization Manager (to confirm that it is a problem when run through replsync.dll in Synchronization Manager). The easiest way to do this is using replmerg from the command-line.
- Make sure the user has permission to write to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\80\Replication\Subscriptions. Without write permission, SQL Server will silently fail to appear in Synchronization Manager. Also, by default in many configurations, non-Administrators do not have write access to this key, so it must be adjusted manually.
The last item is particularly important and required quite a bit of my time to determine... which resulted in the need for this post. Hopefully one of the above suggestions will help you avoid spending the same amount of time that I did to solve this problem.
Sunday, June 20. 2010
Triggers, NOT FOR REPLICATION Identity, and Merge Replication
Initial Note/Warning
I hope that there is a more elegant, simple, and robust method for dealing with the issues described in this post than the one that I present herein. My method is a rather ugly kludge that relies on undocumented features of SQL Server and is not nearly as parallelizable as I would like. If you are aware of a better method (which I hope exists), please don't hesitate to post it in the comments. Until a better method is posted, I invite you to use and/or learn from the method presented in this post.
Background
Consider an SQL Server 2005 database which is published for merge replication from a single publisher with multiple subscribers. The database contains 2 tables which we are considering: Forms and InboxForms. Forms contains some sort of form data and InboxForms references all forms which are present in a user's "inbox". Each of the tables contains an INT IDENTITY primary key column and several other columns of data that are not relevant to the problem at hand. The publication is filtered based on the Forms that each user is able to view (determined by some separate mechanism not described here). When a new row is inserted into Forms, a trigger is used to route the forms into the appropriate inbox(es).
The Problem
The routing can not occur (exclusively) at the subscribers, because the filter for the subscriber will not necessarily include user information for the recipient of the form, and the form can not be placed into InboxForms if its recipient user does not exist on the subscriber. So, the trigger must run on the publisher during synchronization when inserts are performed by the merge agent (i.e. the trigger must not be marked NOT FOR REPLICATION). However, in this configuration, when the merge agent runs, the following error message is produced:
Explicit value must be specified for identity column in table 'InboxForms' either when IDENTITY_INSERT is set to ON or when a replication user is inserting into a NOT FOR REPLICATION identity column.
The problem is that the INT IDENTITY column in InboxForms was marked NOT FOR REPLICATION when the publication was created, in order to facilitate automatic identity range management, which is described in the Replicating Identity Columns article in the SQL Server Books Online. NOT FOR REPLICATION behaves very similarly to IDENTITY_INSERT (as hinted at in the error message), such that when a row is inserted by the merge agent, the identity seed value is not incremented and the value of the identity column must be explicitly specified. Note, however, that it is not the same mechanism as IDENTITY_INSERT, so changing IDENTITY_INSERT in the trigger will not remove the requirement for explicitly specified identity values.
The Solution
One method to solve this problem is to disable NOT FOR REPLICATION, as suggested in KB908711 (which specifically addresses this issue). However, using this option will interfere with automatic identity range management, since the identity values generated on the subscribers can not be copied to the publisher, and other steps will need to be taken to manually manage identity values. For me, this was an unacceptably high price to pay and another solution was required.
A solution which does not interfere with automatic identity range management is to calculate values for the identity columns and explicitly specify those values when they are required. Accomplishing this requires several features of T-SQL: To determine when the values are required, the trigger needs to test whether it is being run from the merge replication agent; this can be done by testing the 'replication_agent' SESSIONPROPERTY. To determine appropriate values for the identity column, use IDENT_CURRENT and IDENT_INCR; note that using the maximum value for the identity column is not necessarily correct, because the maximum identity range will not necessarily be allocated to the publisher. Finally, DBCC CHECKIDENT can be used to update the identity seed value (which is not affected by explicitly inserted identity values).
One other complicating factor in our implementation is that there is no way to atomically insert explicit identity values and update the identity seed value. Therefore, locking is required to prevent multiple connections from simultaneously updating the values and causing collisions (or collision errors must be caught and retried). In the following implementation, an exclusive table lock is acquired, which prevents any inserts from occurring on the table while the trigger is running. This is a serious performance cost, as it prevents any other operations on the locked table from completing while the trigger is executing. Keep this in mind when designing the queries that will run while the lock is held.
Now, without further ado, here's the trigger:
ALTER TRIGGER TR_RouteForms
ON dbo.Forms
AFTER INSERT
AS
BEGIN
    -- Client isn't expecting routing counts from their insert
    SET NOCOUNT ON;

    IF SESSIONPROPERTY('replication_agent') <> 0
    BEGIN
        -- Running from the replication agent
        -- Need explicit value for NOT FOR REPLICATION IDENTITY columns
        -- Use transaction to limit lock scope
        BEGIN TRAN

        -- Variables for IDENT_CURRENT and IDENT_INCR, required
        -- because DBCC CHECKIDENT syntax won't support nested parens
        DECLARE @Ident INT, @Incr INT;
        -- RowCnt used to preserve @@ROWCOUNT
        DECLARE @RowCnt INT;

        -- Must acquire exclusive lock on InboxForms to prevent other
        -- inserts (which would invalidate the identity and cause
        -- collisions in the identity column).
        -- Select into variable to prevent resultset going to client
        -- WHERE clause quickly evaluated, returns small (empty) result
        DECLARE @Dummy INT;
        SELECT @Dummy = InboxFormID
        FROM InboxForms WITH (TABLOCK, XLOCK, HOLDLOCK)
        WHERE InboxFormID = 0;

        -- Perform the form routing (inserts into InboxForms)
        SET @Ident = IDENT_CURRENT('InboxForms');
        SET @Incr = IDENT_INCR('InboxForms');
        INSERT INTO InboxForms (InboxFormID, FormID, ...)
        SELECT @Ident + @Incr * ROW_NUMBER() OVER (ORDER BY FormID) AS InboxFormID, FormID, ...
        FROM inserted
        WHERE ...routing criteria...
        SET @RowCnt = @@ROWCOUNT;

        IF @RowCnt > 0
        BEGIN
            -- At least 1 form was routed, update the identity seed value
            -- Note: Can't use MAX(InboxFormID) since publisher may not
            -- have been allocated the maximum identity range
            SET @Ident = @Ident + @Incr * @RowCnt;
            DBCC CHECKIDENT (InboxForms, RESEED, @Ident)
                WITH NO_INFOMSGS;
        END

        COMMIT TRAN
    END
    ELSE
    BEGIN
        -- NOT running from the replication agent
        -- IDENTITY values are generated normally, so do not specify
        -- InboxFormID explicitly

        -- Perform the form routing (inserts into InboxForms)
        INSERT INTO InboxForms (FormID, ...)
        SELECT FormID, ...
        FROM inserted
        WHERE ...routing criteria...
    END
END
Thursday, June 10. 2010
Non-strongly typed attributes in ASP.NET
Wednesday, May 12. 2010
Proper Error Page Handling in ASP.NET
ASP.NET provides a convenient mechanism for configuring error pages through the customErrors element of web.config, which allows developers to select pages to be displayed based on the error code the server would have generated. However, this mechanism has some serious drawbacks. Most importantly, the error code is no longer sent to the browser! For example, when a custom error page is used and a 500 error occurs, instead of sending HTTP status code 500 to the browser, ASP.NET will send a 302 redirect to the browser (and a 200 on the error page, assuming it does not throw an error itself). In my opinion, this is completely wrong and misleading. Although the users see an error page, the software (unless it has custom logic to detect the error page by URL or by title/keyword search) is told that everything is working as expected and that the page has temporarily moved. It's poor practice for search optimization (although I'm sure major search providers have long ago written logic to recognize this sort of misinformation) and it's poor practice for any automated consumers of the site.
To fix this, I highly recommend writing some custom error handling logic. There is lots of useful information to get started in the Rich Custom Error Handling with ASP.NET article on MSDN (as long as the suggestions to use Response.Redirect are ignored). I recommend that the fundamental component of the error handling be something like the following (in Global.asax):
void Application_Error(object sender, EventArgs e)
{
    // Insert any logging or special handling for specific errors here...
    Response.StatusCode = (int)System.Net.HttpStatusCode.InternalServerError;
    Server.Transfer("~/Errors/ServerError.aspx");
}
Using the above code, the server will respond with HTTP status code 500 and a useful error page to tell users what happened and what they can do about it. It will also not mess with their browser URL so that they can easily retry the page and/or explain what happened and where to a tech.
Notes:
Note1: Make sure the error page is larger than 512 bytes, otherwise IE and Chrome will not show it in their default settings.
Note2: The HTTP 1.1 status codes and their meanings are described in Section 10 of RFC2616.
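The same principle applies outside ASP.NET: serve the friendly page together with the real status code, instead of redirecting to it. As a minimal sketch in Python (a WSGI app of my own invention, not related to the ASP.NET code above) that also respects Note1's 512-byte minimum:

```python
def server_error_app(environ, start_response):
    """Minimal WSGI sketch: send a friendly error page WITH status 500,
    rather than redirecting to it with a 302/200 pair."""
    body = (b"<html><body><h1>Something went wrong</h1>"
            b"<p>Please retry; if the problem persists, contact support.</p>"
            b"</body></html>")
    # Pad past 512 bytes so IE/Chrome default settings show this page
    # rather than their built-in error message (see Note1 above)
    body = body.ljust(513)
    start_response("500 Internal Server Error",
                   [("Content-Type", "text/html"),
                    ("Content-Length", str(len(body)))])
    return [body]
```

Automated consumers (search crawlers, monitoring scripts) then see a 5xx and can react correctly, while human users still get the helpful page.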
Tuesday, May 11. 2010
Making Custom Replication Resolvers Work in SQL Server 2005
Background
SQL Server provides a very convenient method for implementing custom business logic in coordination with the synchronization/merge process of replication. For tasks which need to be done as data is synchronized, or decisions about resolving conflicts which are business-specific, implementing a custom resolver is a surprisingly straight-forward way to go. For more information, check out the following resources:
1. How to: Implement a Business Logic Handler for a Merge Article (Replication Programming)
2. How to: Implement a COM-Based Custom Conflict Resolver for a Merge Article (Replication Programming)
3. BusinessLogicModule Class
Making It Work
Things are never quite as easy as they seem... Chances are, some sort of error message was generated once the DLL was deployed and the instructions in (1) were completed. For example, the following error is common:
Don't Panic. First, check that the DLL has been placed in the directory containing the merge agent on the subscriber (assuming this is being done with pull - place the DLL on the server for push) or registered in the GAC. This message can also indicate dependency problems for the DLL where dependent libraries can't be found/loaded. One way to test that the assembly can be loaded is to compile and run the following program in the same directory as the merge agent:
using System;
using System.Reflection;

class Program
{
    static void Main(string[] args)
    {
        // Replace ASSEMBLY_NAME and CLASS_NAME with the values
        // registered for the custom resolver
        TryLoadType(ASSEMBLY_NAME, CLASS_NAME);
        // Leave window visible for non-CLI users
        Console.ReadKey();
    }

    static void TryLoadType(string assemblyname, string typename)
    {
        try
        {
            Assembly asm = Assembly.Load(assemblyname);
            if (asm == null)
            {
                Console.WriteLine("Failed to load assembly");
                return;
            }
            Type type = asm.GetType(typename);
            if (type == null)
            {
                Console.WriteLine("Failed to load type");
                return;
            }
            ConstructorInfo constr = type.GetConstructor(new Type[0]);
            if (constr == null)
            {
                Console.WriteLine("Failed to find 0-argument constructor");
                return;
            }
            object instance = constr.Invoke(new object[0]);
            Console.WriteLine("Successfully loaded " + type.Name);
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine("Error loading type: " + ex.Message);
        }
    }
}
Note: It is very important to correctly determine where the merge agent executable is and where it is being run from when testing. The DLL search path includes both the directory in which the executable file exists and the directory from which it is run (for weak-named assemblies). replmerg.exe usually lives in C:\Program Files\Microsoft SQL Server\90\COM, but mobsync.exe (if you are using Synchronization Manager or Sync Center) is in C:\WINDOWS\system32, and this will have an effect on the assembly search path.
Make sure the names are exactly as they were specified in sp_registercustomresolver. If the problem was a misnamed assembly or class (because you are like me and fat-fingered the name...), here's how you fix it: sp_registercustomresolver can be re-run with the same @article_resolver parameter to overwrite the information for that resolver. This overwrites the information stored in the registry at HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\90\Replication\ArticleResolver (or a similar location for different versions/configurations). However, if the resolver has already been attached to an article, the information is also stored in sysmergearticles in the article_resolver (assembly name), resolver_clsid (CLSID), and resolver_info (.NET class name) columns. So, run an UPDATE on these columns to fix errors, as appropriate.
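As an illustrative sketch only (the article name and resolver values below are placeholders of mine; verify the existing column contents before updating system tables directly), such a fix-up might look like:

```sql
-- Hypothetical example: correct a mistyped assembly/class name for
-- the merge article named 'MyArticle'. All values are placeholders.
UPDATE sysmergearticles
SET article_resolver = 'MyCompany.Resolvers',                 -- assembly name
    resolver_info    = 'MyCompany.Resolvers.MyLogicHandler'   -- .NET class name
WHERE name = 'MyArticle';
```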
Good Luck!