Calling ASP.NET REST Web API From Mirth Connect

I had a scenario given to/architected at me where a Mirth interface needed to call a .NET web service. The items to be sent were the name and contents of a file. In this case, Mirth isn’t being used as it usually is (a health care HL7 integration engine), but used as a directory monitor.

Mirth will pick up and process new files. In this case, the file is a customer order from a scanner/fax machine, saved as a .tif file. This file needs to be sent to the web application. It's my task to build the sender and the web service.

The hopelessly out of date “accepted best practice” for web services at this customer site is still ASMX SOAP web services. Inertia is tough to break for some people.

Out with the old, and hello .NET Web API. Let’s build and use HTTP REST services.

Data Flow

  • Mirth detects a new file
  • reads the file’s contents and Base64 encodes them
  • sends the contents and file name to a web service. We’ll POST the contents via HTTP.
  • web service converts file to another format
  • web service saves its work somewhere (database, file system, whatever your workflow is)
  • Mirth archives/deletes the original file

Receiver - .NET Web API Class

Instead of a SOAP ASMX web service, use a REST Web API Controller. The file can be POSTed to a method.

The file being transmitted was a .tif, but it could really be any file type. The service expects the file contents to arrive Base64 encoded in the request body, along with the file name.

Create a class to encapsulate all the data that will be required. I called mine CustomerFile.

Note: only one parameter in the method signature can use the FromBody attribute. This is by design in Web API.

public class DocumentsController : ApiController
{
    public void Post([FromBody]CustomerFile file)
    {
        // Decode the Base64 payload back into the original file bytes
        byte[] incomingBytes = Convert.FromBase64String(file.FileContentsBase64);
        using (var incomingFileStream = new MemoryStream(incomingBytes))
        using (var tif = Bitmap.FromStream(incomingFileStream))
        using (var tifStream = new MemoryStream())
        {
            tif.Save(tifStream, ImageFormat.Tiff);
            file.PngBytes = ConvertTifToPng(tif);
            SaveIncomingFile(file);
        }
    }
}
public class CustomerFile
{
    public string FileName { get; set; }
    public string FileContentsBase64 { get; set; }
    public byte[] PngBytes { get; set; } // populated server-side after conversion
}

The endpoint will use the Web API defaults for route and method name: https://localhost:6788/api/documents. The HTTP method maps to the Post method, and the route is api/<controller name>; in my case, that’s api/documents.

Sender - Mirth Interface

We’ll use Mirth’s HTTP Sender destination rather than the Web Service Sender (which is for sending SOAP messages).

I used Raw as the data type in the inbound and outbound templates. Using the default of HL7 v2.x produces bad results. Make sure both the Source and Destination message types are Raw.

Source Transformer:

channelMap.put("fileBase64Contents", getFileBase64());
channelMap.put("fileName", sourceMap.get('originalFilename'));

function getFileBase64(){
	var fileBytes = getFileBytes();
	return FileUtil.encode(fileBytes);
}

function getFileBytes(){
	var filePath = buildFilePath();
	return FileUtil.readBytes(filePath);
}

function buildFilePath(){
	var filePath = sourceMap.get('fileDirectory') + "\\" + sourceMap.get('originalFilename');
	return filePath.replace(/\\\\/g, "\\");
}

Destination

We simply use those values stored in the channel variables in a JSON payload to the web service.

The key here is that:

  • the JSON has keys that map exactly to the properties on CustomerFile. FileName and FileContentsBase64.
  • the channel variables are put within quotes. The engine will substitute their values within the quotes.
{
  "FileName":"${fileName}",
  "FileContentsBase64":"${fileBase64Contents}"
}

Destination Properties

  • Channel Type: HTTP Sender
  • URL: http://localhost:1982/api/documents, where documents comes from the name of your Web API controller class (DocumentsController minus the Controller suffix).
  • Method: POST - choose the right method for your scenario. I chose POST over PUT.
  • Content Type: application/json
  • Content: your JSON with channel variables within (as above)

Read More

Visualize Your Tweets In A Word Cloud

There’s a bot that will read your tweets, aggregate the words by frequency, and create a word cloud.

Tweet something/anything at @wordnuvola and include the hashtag #wordcloud.

The response came an hour later, and it was enjoyable to read.

The source for the bot is on GitHub, and it’s written in Python. It’s a great read even if you’ve never written Python.

I’m surprised the noisewords/stopwords list has 712 English words, and that none of them are swear/offensive words.

Read More

Overcast First Thoughts / Mini Review

Marco Arment released a podcast player iPhone app - Overcast. Marco has had his own podcast series on 5by5 (Build And Analyze), and now is part of Accidental Tech Podcast. Clearly he’s passionate about the podcasting industry. Marco blogged about Overcast’s release.

I loved this app immediately. It gets so many details right.

Playback

  • Large buttons on playback. NICE! Nothing’s worse than small touch targets.
  • the scrubber displays large, and is a large touch target. Interesting that it doesn’t have variable-speed scrubbing (which I meh on anyway).
  • an EQ display in 3 places lets you know what’s playing: in the main podcast list view, in the podcast details view, and in the now-playing view. I first considered it eye candy, but realized that it was important feedback in drawing your attention to the currently playing podcast.
  • each podcast can have its own playback speed, and I use this feature. It’s available in most advanced podcast apps, and I’m happy to see it here.
  • on the play screen, I accidentally scrolled the artwork up, and the notes appeared. Hooray. It wasn’t immediately obvious, and I frequently use show notes. You can also tap the image and the notes will scroll automatically.
  • the smart speed feature appears to minimize gaps of silence. The number keeps changing even as I hear people talk. I would love to hear more about what’s going on underneath.
  • both the lock screen and control center show your configured forward and reverse skip times. This isn’t default behaviour on iOS - the developer has to explicitly include this, and I’ve seen devs/apps screw up the lock screen displays & controls. Nice touch.
  • notably missing is a volume slider. I sometimes use the volume slider on Downcast, but I’m sure I’ll get used to the physical buttons on the phone.

  • the main podcasts list screen has a perfect balance of album art and meta info. I prefer to see the art as my main navigational cue.
  • the mini player is visible at the bottom ALL THE TIME. Nice.

  • instead of buttons, you can swipe from the left, and it’s much quicker than the back buttons are.
  • menus animate quickly
  • settings are in the right place where you need them. Allow Cellular Downloads is on the downloads page, not buried in the settings page.
  • The podcast episodes list includes both sort-by directions. Useful for finding as many older episodes as the podcast’s RSS feed provides.

Discovery

  • The + (add) button at the top of the screen allows you to add podcasts either by RSS, search, or by a list of recommendations.
  • Twitter integration opens up a bunch of neat features. Once you allow Overcast to read your Twitter following list, it displays those people’s favorited podcast episodes as ‘recommendations from Twitter’. So as you click the Favorite button, either in the show notes, or via the share sheet, you basically help create good signal for others.

Other Benefits

You can manage and play your podcast playlist on the Overcast website too. Here’s where the account creation is important. Episodes retain their play position. The level of integration and details taken care of on v1 are impressive. It’s unbelievably well polished, and sets a really high bar for ease of use. All the other podcast apps look instantly dated and fugly.

I’ve uninstalled Downcast already after 5 mins of using Overcast. #sorryNotSorry, Downcast. Today your app is stuck in the iOS 6 shiny textured look and feel, and is super-cluttered.

If you’re already using an existing podcast app, Overcast can read OPML files from those apps. You won’t have to spend 20 minutes doing the wasteful search/re-subscribing to podcast feeds in Overcast. No question the best experience I’ve had transitioning from an old to a new podcast app.

Background on Overcast

Marco talks a bit about why he develops apps, and the reason he started creating a podcast app.

Read More

Tech Partisanship Is Stupid

Tech Partisanship

It’s time to stop with the partisanship in tech. It’s a silly and fruitless activity that doesn’t get us anywhere. It either reinforces your beliefs, or alienates others.

I’ve observed a few interactions with techy people (enthusiasts, developers, IT pros) recently, and over the span of years, where the discussion revolved around mobile phones, their operating systems, computer operating systems, cloud services, etc. I hear things like:

  • ewwww, why would you use Android?! It’s like a cesspool of malware.
  • a Mac? I don’t get it, why use a computer with one-button mouse?
  • Ruby? Doesn’t it let you overwrite a variable with another type? That’s insane!
  • iPhone just locks you into using a dumb interface with no customization. You can’t tweak anything! Why do you follow the crowd?
  • we need to get more Windows Phones going on around here. It’s more modern.
  • Windows sucks. Microsoft marketshare is plummeting; M$ is dead in this post-PC era.

I don’t ever want to be engaged in these discussions, and I’ll actively disconnect if one arises. What’s the point? I’m not sure if it’s a maturity-level issue, or geek-boastery, or what.

Tech Tribalism

It’s useful to understand why tech enthusiasts feel or show allegiance to tech products and/or their companies. My bottom line on this is that people like to justify their decisions: both financial and emotional.

When you choose a personal tech product, that decision involves a financial commitment. That might be an initial outlay of money, or an agreement to pay a large sum spanning multiple years. Either way, it’s a large undertaking, and you want to get maximum value from this decision.

When you’re choosing to spend a large sum of money, you might go through a few thought processes. You’re basically making a mental investment in your potential choice.

  • is this the right product for me?
  • do I have an escape plan if it doesn’t work out?
  • balance the pros and cons of this choice

By the time the decision-making process is over, you’re convinced that this choice is right for you, and you want to avoid paying again for that decision and having to go through it all again.

Remember that these companies are large multi-national corporations. They do not care about you. They care about you choosing to open your wallet and buy their next product. The most obvious example is Apple, who is incredibly successful in marketing their products. They use emotion, values, and aspiration to colour your perception and subtly convince you that their product aligns with you. It’s Not a Church, It’s Just an Apple Store. In their case, it’s imparting your values with their brand, their logo, and their products.

Tech Partisanship

Companies love loyal customers, but why do you want or need to be a loyal customer? I understand the basic human need to be a part of a group and find acceptance, and that you benefit by sharing experiences, tips and tricks, etc. I also understand that tech companies help encourage you to stay in their product silos by offering features or advantages. This helps reinforce your platform choice.

Here’s where I think things go sideways for the tech geek: they’re predisposed to defending their investment, and now feel vindicated in their choice, as they’ve gained more benefits from it. People can turn from tribalists into partisans, and actively shut their brains off. I’m not sure what it is about the tech enthusiast personality, but it seems more susceptible to partisanship than the normals are. Mix that with some snark and it doesn’t present well.

The subtle point: all platforms are innovating and evolving, and all users within those platforms are benefiting (more or less) at the same rate. This results in a set of warring factions, each proclaiming that Feature X is superior to Opposing Product Y. That may be true… for that person at that time.

Fast forward ten years, and it all seems silly.

Try All Slices.

A well-rounded tech worker stays on top of technology. Some even consider it their job. When a new technology comes on the scene, try it and take the benefit.

A personal example: I purchased a Mac for home use. I’m a Windows developer, and have been in the Windows world since beginning my career. I didn’t really need to learn OS X; they’re different environments without much overlap, but it definitely broadened my view, and that’s a good thing. Why?

  • I wanted to see and learn another large computing environment.
  • I kept hearing its users’ excitement.
  • iOS development was exclusive to OS X.
  • I’ve not had much Unix exposure at all.
  • I love learning. This change showed me new applications, quirks, and cultures to help shape my view of the industry.

It turns out, magically, that it wasn’t hard. I didn’t have to push something off the mental stack to learn a new OS.

I feel that it’s my job to know what’s out there, and to choose the right/best tool for the job. So your job is personal computing? Yes, choose what fits your needs and what works for you. Your job is software development? You’d damn well better know what’s happening out there; otherwise, you’ll atrophy. You may find yourself on the wrong side of a change wave one day.

Continually Evaluate

Does one platform suck today? Keep your eyes open, as it may not in a year from now. Consider phones and tablets: Android’s early detractors had a lot of points, but those problems and pains are basically all solved. iOS didn’t have copy and paste in v1 in 2007. These OSs keep evolving.

Windows has all the major dev platforms (Ruby, Python, Node.js, Android and iOS), you can run any OS in Windows Azure, you can RDP to a Windows machine from OS X, and .NET runs on *nix. We’re in an age of convergence and the silos are breaking down. Granted, the enterprise will take a while longer to adapt.

All these things evolve; your choices should too.

Read More

They Just Emailed Me My Own Password

Yet Another Account

We’ve all been forced to create an account at some web application. If you’re lucky, the service has identity integration with the popular providers - Facebook, Twitter, Google, Microsoft, and they’re only storing authentication tokens from the 3rd party.

I created an account on the Alberta Health Services (AHS) careers or job board website. I stumbled a few times with my entries into required fields not being correct or to their validator’s liking, but finally succeeded with a nice long complex LastPass-generated password. This is an HR careers website; it seems like the perfect candidate for LinkedIn integration. The form asked for a complicated mashup of account credentials, personal demographic data, and HR job related information.

Careful Choosing Your SaaS Vendor

The Alberta Health Services careers site is run or provided by HRSmart. It’s obviously a good choice by AHS not to run their own careers/recruiting/job board: outsource that to a company who’s competent and is focused on providing the right features. No need to re-invent the wheel.

I came away with the conclusion that this software has problems. The problems aren’t insurmountable, but are large enough to provide roadblocks to users.

Ask your vendor: Does your software fit my workflow, or do we as the customer need to fit (and adjust) into yours?

They Emailed Me My Username And Password

Are you kidding me?

There are at least 4 things wrong with passwords being emailed:

  1. You know my password - you shouldn’t know it. You should only know a hash of it. Stop storing your users’/customers’ passwords in your database tables. I don’t care if it was encrypted or stored in plaintext. There’s no reason to. Your database tables are now full of your customers’ users’ PII, and alongside sit their email addresses and passwords. A future security breach will end up with all these details leaked for the public to see. Your users will be pwned.
  2. Normal people re-use passwords: the same password all over the web with the same unique identifier - their email address. When your data is leaked, your poor security practice contributes to the damage that each of your customers’ users will feel. With those leaked credentials in hand, crackers are likely to find success accessing other, unrelated services.
  3. You transmitted the password in the clear - SMTP isn’t encrypted, and the contents of email are available to anyone listening along the path from your server to my email inbox.
  4. I didn’t ask for this information. Don’t send it.

Ironically, AHS or HRSmart has included this gem in their fine-print:

The counter-argument might be something like:

“our # 1 customer support issue is forgotten passwords, so we’re pre-emptively solving that problem by emailing the user their password in case they forget. It’s also convenient for non-technical users.”

Stop. That’s solving your problem as a software vendor, and probably with limited success. It also creates the problem of leaked credentials. Have you considered making a password reset feature on the login page that does all that for you?

Things this form did correctly:

  • did not ask me to provide n ridiculous security questions and answers for account recovery.
  • showed me which fields were required with visual cues.
  • used SSL. Actually, wait a minute, who cares if the form used SSL? The app emails the credentials in plaintext via SMTP, wasting all that effort and investment in securing the web form.

Solution: For identity, use another authentication service that your user trusts.

Feedback and Response

I tweeted a few times [1] [2] [3] about this poor security practice, but haven’t heard from either of the entities. I’ve emailed HRSmart about it, but haven’t heard back.

I’ll post more here if anyone does respond.

This Form Is Poorly Designed & Asks For Too Much Info Up Front

This is a critique of the form that the software vendor HRSmart provides their customers, who in turn present to their users.

My goal as a visitor when visiting your site is to see what’s there for me, and to take action - apply. Your goal as an application is to let me do that as easily as possible.

The ‘create account’ form really gets in the way. It asks for non-vital information at a vital time, and it asks for a LOT of it.

  • Field titles are emphasized with red, which is a nice feature. It lets me know what I’m forced to enter. The problem is that the fields aren’t de-emphasized (i.e. turn to black) when you’ve entered a valid value. So what happens? I submit the form and am punished/shown a myriad of ways that I have screwed up. Gosh, if only I had known what you wanted.
  • Referral Source as required info - why does this matter at this stage? Let a recruiter ask for this information in person. This info is nice for the HR department, but isn’t required for the user to get their application in. You’ve just forced the user through 2 dropdown boxes. Really, you want to know how well your HR recruitment efforts are doing before actually getting the application? This information is given far too much prominence: its first-up placement overtly tells me you care more about your HR recruitment events than you do about my name.
  • Email Address - you JUST had me provide that as my username in step 1. Why are you making me repeat myself?
  • City, Province, Country - you can auto-detect this information. This information is non-vital at this time. I don’t expect to be sent a letter by mail or a knock on the door. Only when we engage in a contract would this information be needed.

**Non-vital** information at a vital time. Collect it later. I know that I’m in the ballpark when I apply for jobs; it’s not in my best interest to apply for jobs that I’m not qualified for or interested in. None of the data below is important in capturing the application. These are all candidate/profile data, not application data. Requiring them here and now is an untimely roadblock:

  • Street Address
  • Postal Code
  • Areas of expertise
  • Job type
  • date available
  • level of education
  • previously employed
  • authorization to work in Canada

I’d expect the counter-argument for this to be:

“if we don’t require people to enter this info, they’ll never give it to us. We would end up with incomplete data and have to chase applicants down for it. That’s why we bought this software: it makes people enter their data.”

I can see how that happens if you don’t guide your users appropriately. Forcing them to enter all this information up-front isn’t the way to achieve your goal of complete HR applicant profiles.

Don’t Roadblock Your Users. Guide Them.

The goal: users want to apply for jobs. Everything else is secondary. Take away all the friction and waste in the process and have the user apply for the job. There are secondary goals for the HR team: profile information, contact information, etc.

Consider this set of (simplified) workflow steps:

  1. Create an account with a popular external identity provider. Make it dead simple for users to create their accounts. Extra bonus: you get to read the data the user has already provided there - the demographic data you’re asking them to repeat into your form.

  2. Apply for the job.

  3. Fill in the secondary details later. You may or may not have a policy to hold back the application until the profile is complete. I’m not an HR pro, and yes, I can imagine the flood of incomplete profiles. But if an applicant doesn’t fill in their profile and gives a half-hearted effort on the application, that gives you a hint, as an HR pro, about the quality of the applicant.

Read More

A Small Utility App For Splitting Large Text Files

A coworker recently asked for a data dump of a table. I put a 350MB .csv file on a shared directory, and for some reason Excel 2007 choked on the file. Hmm, that’s weird. So I wrote a utility to split the large file into smaller files.

Download The App At GitHub

The concept is that sometimes you might have a large file like a web server log, a large csv from some app, or dumped from SSMS. If you actually need to open and view the files (i.e. in Excel), then you might want to have a set of smaller files.

Otherwise, if you’re searching for a needle in that haystack, you’re best off using a tool like Notepad++ or Sublime Text and their ‘find in files’ capabilities.

It’s a simple console app that will take or ask you for 2 args:

  1. The file you want to split and create multiples from.
  2. The number of new files to create.

You can run this app from Windows Explorer or via the command line, optionally passing the arguments to the app.

How Does It Work?

The app basically does this:

  • gets, cleans, and verifies the args from the user
  • opens the source file and counts its lines
  • calculates the number of lines to write per split file
  • writes each chunk of the source to the new files

Line-Based

I took the approach of reading files by their lines, rather than their bytes, because the files are assumed to be structured as rows/lines, and it’s easier to understand from the user’s point of view. The user can likely better answer the question “how many files would you like?” than “how many MB would you like each file to be?”.

The app attempts to read all lines from the source file into memory. It calculates the number of lines per file using

linesPerFile = RoundUp(sourceLines / numFiles)

For relatively large files - 900MB+ in my tests during development - the app will encounter an OutOfMemoryException when it attempts to read all the lines at once. In that case, the app catches the exception and retries by lazily reading the lines while processing.

The files are created by looping from 1 to n: take x lines from the source file, and write them to a new file.

Small Utility. Who Cares?

Nobody. It’s more of an effort to prevent being the ghost who codes. The compiled binary and its source code are available at my LargeFileSplitter GitHub repository.

Read More

Undo Another User's Checkout In TFS 2010

tf undo /workspace:machineName;domain\acct $/TeamProject/Path/To/Files/* /server:yourTfsServerName/TeamProjCollection

Sometimes I’d see an error returned: TF249051: No URL can be found that corresponds to the following server name. Verify that the server name is correct.

Try including the full URL to the project collection:

tf undo /workspace:machineName;domain\acct $/TeamProject/Path/To/Files/* /server:http://yourTfsServerName:8080/tfs/TeamProjCollection

Or maybe you’d rather nuke the user’s workspace:

tf workspace /delete machineName;domain\acct /server:http://yourTfsServerName:8080/tfs/TeamProjCollection
Read More

How To Get Around Your Group Policy Blocking Google Chrome Updates

Corporate IT departments love locking down workstations. It’s their job. One great tool is Active Directory Group Policy. IMO, forcing users to stick to one version of a browser is a) lame and b) dangerous.

Rant

Preventing web browser updates is dangerous because these policies actively prevent security updates from reaching the client. The belief is probably that some application needs a non-Internet Explorer browser, but (wrongly) that it must stay locked to the version the vendor shipped it with. There’s all kinds of you’re doing it wrong in this scenario.

Update failed (error: 7) An error occurred while checking for updates: Google Chrome or Google Chrome Frame cannot be updated due to inconsistent Google Update Group Policy settings. Use the Group Policy Editor to set the update policy override for the Google Chrome Binaries application and try again; see http://goo.gl/uJ9gV for details.

Workaround The Policy

If your workstation has a Group Policy blocking Google Chrome updates, and you have administrative privileges, you can reconfigure your machine to get updates.

1. Install This Group Policy Template

Google makes available a Group Policy Administrative Template for Google Chrome updating. Download it here, too - GoogleUpdate.adm. Install it:

  • run gpedit.msc
  • open Computer Configuration / Administrative Templates. Right click, and select Add/Remove Templates.
  • Add the GoogleUpdate.adm template.

Navigate down to Classic Administrative Templates/Google/Google Update.

Modify These Policies

  • Preferences/Auto-update check period override -> Enabled
  • Applications/Update Policy Override Default -> Enabled -> Always Allow Updates
  • Applications/Google Chrome/Allow Installation -> Enabled
  • Applications/Google Chrome/Update Policy Override -> Enabled -> Always Allow Updates
  • Applications/Google Chrome Binaries/Allow Installation -> Enabled
  • Applications/Google Chrome Binaries/Update Policy Override -> Enabled -> Always Allow Updates

2. Tweak The Registry

Your Group Policy may have left a few items in the Registry that need removing.

Inspect and use this Registry editor file - GoogleChromeUpdateEnable.reg - to delete those 2 values.

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Update]
"Update{8A69D345-D564-463C-AFF1-A69D9E530F96}"=-
"Update{8BA986DA-5100-405E-AA35-86F34A02ACBF}"=-

As per Google’s Update fails due to inconsistent Google Update Group Policy settings, you should verify that your configuration is updated correctly:

  • Start > Run > regedit
  • Find and open HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Update\
  • Verify that the following new group policy setting is present: Update{4DC8B4CA-1BDA-483E-B5FA-D3C12E15B62D}
  • Delete these values in HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Update:
    • Update{8A69D345-D564-463C-AFF1-A69D9E530F96}
    • Update{8BA986DA-5100-405E-AA35-86F34A02ACBF}

Policy Sidestepped

Simply relaunch the Chrome About page and it will automatically start the check. You won’t need to restart Windows or log off.

Chrome Update Success

Read More

Computer Replacement Program Promises Old Software

Corporate entities move slowly, with understandable root causes. Here’s a great illustration.

Within, there’s the tacit acknowledgment that slow and old are negative/undesired attributes.

Ironically, the Windows and Office versions they’re promising to deliver are both a generation old.

Read More

User Interface Fail

Quick Rant On Message Boxes

Message boxes are so full of fail. Here’s one from an application that thousands of people use.

1. Using A Message Box

This is a desktop application. My biggest annoyance is that the application interrupts what the user is doing. It forces the user to navigate to its only button - the stylish Close button. So it rewards me with the required action of making no choice at all. My acknowledgment of the problem actually means nothing.

Fix: implement and promote a status bar where information is presented. Flash up new information. Keep history. Fade away.

2. Mixed or Confusing Message

The message mixes 2 different issues and doesn’t clarify which is the real issue. Access Denied and Can’t get IP address for server are different problems with different solutions. So which is it? Is it both?

Fix: give the user only useful and correct information in the headline. Option for more.

3. Lack Of Useful Info

OK, so the app is trying to hit a server. What can I do about it? Why is that important to the task I was doing?

Fix: Tell the user what they need to know in terms they can understand.

4. Is This User The Right Person To Notify?

No. The user in this case is a clinical user, like a nurse or registration clerk, who only needs to get their job done. They don’t know jack about servers on the back end. This is the wrong message for this user.

Fix: send this message to the administrative or technical types who can actually act on the problem. Send an email or SMS to that group. Log it.

Give The Right Message To The Right User

I want to improve in this aspect myself, so here’s my new list for creating messaging to user(s) when an exception occurs:

  1. Can I tell whether the current user should be reasonably expected to know that this could happen?
  2. Can the user actually do something about it?
  3. Is there an admin group that I can alert?
  4. Do I need the user to make a choice between Action A or B?
  5. Can I retry silently n times while the user waits a bit?
  6. Don’t force-interrupt the user.
  7. Tell the user in a status area:
    • Something is blocking them from doing that thing they wanted.
    • The root cause is blocking them from (saving the info/sending the notification/calculating the average/making the change).
    • We retried n times.
    • It’s (not) a problem they can solve.
    • It will (not) block them from Task X.
    • Someone has been alerted at datetimestamp.
    • You should retry when (admins let you know/etc.).
  8. Update the status area when we can detect that the problem has been solved.

Bonus Alert

Read More

Batch Rename Files To Remove Substring From File Name Using PowerShell

Use this PowerShell command to rename all the files in the directory to remove the “StoredProcedure” substring. This substring showed up for me when SQL Server Management Studio’s Generate Scripts utility wrote the files out to disk. The same suffix is present when scripting out other objects - View, Table, etc.

Remove Filename Substrings With PowerShell

Open a PowerShell prompt, navigate to the directory your files are in, and run this one-liner to remove those substrings.

PowerShell
# -replace takes a regex pattern, so escape the literal dot
Dir *.StoredProcedure.sql | Rename-Item -NewName { $_.Name -replace "\.StoredProcedure","" }

Hat tip to Steve’s article on Tweaks.com


Install a Windows Service using PowerShell

Visual Studio 2012 removed support for installer projects (.vdproj), as described by Buck Hodges in this MSDN blog post. My use of this project type was for installing Windows Services. The experience wasn't satisfying; it was complicated and ponderous.

Make your life easier: if you're holding onto Visual Studio 2010 because of an installer project for a Windows service, consider using PowerShell instead.

This script will:

  • initialize a few strings to be used later during installation
  • convert your plaintext password to a secure string
  • create a security credential for the Windows service to be run under
  • check whether the service exists; if so, stop and delete it
  • install the service
  • optionally start the service

If you experience an error when running this script like The specified service has been marked for deletion, I’ve found that closing any instances of services.msc has solved that issue.
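A minimal sketch of such a script, following the bullets above (the service name, paths, and account are placeholders — adapt them to your environment before use):

```powershell
# initialize strings used during installation
$serviceName   = "MyWindowsService"
$exePath       = "C:\Services\MyWindowsService\MyWindowsService.exe"
$username      = ".\svc_account"
$plainPassword = "NotARealPassword"

# convert the plaintext password to a secure string and build a credential
$securePassword = ConvertTo-SecureString $plainPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($username, $securePassword)

# if the service already exists, stop and delete it
$existing = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
if ($existing) {
    Stop-Service -Name $serviceName -Force
    & sc.exe delete $serviceName
}

# install the service
New-Service -Name $serviceName -BinaryPathName $exePath -Credential $credential -StartupType Automatic

# optionally start the service
Start-Service -Name $serviceName
```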


Don't Call Them Bugs

Bugs vs. Defects

I really don’t like the word ‘bug’ when describing a piece of (missing) functionality that doesn’t work correctly or has unintended consequences. I much prefer the more accurate word: defects. If you use ‘bugs’, you’re doing yourself, your projects, and your customers a disservice.

It’s a Mindset

The word ‘bug’ brings to mind a cutesy picture of a ladybug or a little grasshopper, or maybe something uglier like a house fly or cockroach. Get real and be professional: these are defects. Don’t trivialize your work and its consequences. Call it what it is. Imagine you buy a car or a house (here we go with the metaphors) – how would you feel if the driver’s door had a flaw whereby the handle would open the trunk on each odd pull, and the driver’s door on each even pull? What about a defect where your GPS took you to the wrong destination? How frustrating or expensive would it be, for you as the customer, to work around that? If your software doesn’t do what you intended, you haven’t tested enough. Shipping that product is, by and large, up to you. If your product contains untested, unpredictable behaviour, that’s all on you!

Not Helping

Yes, it’s a nice story about Grace Hopper and the moth at Harvard. Of course, it’s an easy, feel-better-about-yourself term. However, it becomes an uphill battle for those professionals who care about terminology when there are products called Bugzilla and FogBugz in the defect-tracking line of programs. I use both, and both are great at what they do. Of course, the term ‘bug’ is well ingrained into our minds and language, and has even crept into the language of the non-programmer.

Personal Software Process

This reminds me of one of the textbooks at BCIT’s CST program: the Personal Software Process by Watts S. Humphrey. Watts is incredibly focused on software quality. One distinct lesson I took from that course was the concept of constantly recording and measuring: defects per 1000 lines of code (KLOC). The other theme, perhaps from the book, or from that particular instructor, was this idea of ‘defects’, not ‘bugs’.

Own Your Defects

I’ve come to this conclusion for myself when thinking about flaws in software:

Don’t trivialize those flaws with your words; own them. Call them what they are – flaws and defects in your software.


Don't Micromanage Your Latte (Start Outsourcing Your Loops)

Consider your behaviour and desires as a customer when you visit:

  • your favourite local coffee shop - you specify what you’d like, rather than how it’s made. You care about the outcome, but rarely care about the process/sequences/steps that are followed as it’s being constructed.
  • a Subway/Quiznos shop - you care equally about the outcome and the construction. Meats, veg, bread type, order of placement of each, precise amount of mustard, pickles, etc.

barista

The difference here is: you aren’t instructing the barista on how to steam the milk, or when to start brewing the espresso. You don’t remind them that their level of ground beans is getting low, or where to store the milk. You are happy to assume they know their job best, and the order of operations is properly under their control. They have their efficiencies to care about, and you’re happy to let them manage that.

Consider now your desires as a programmer when your task is to find customers matching a condition. Let’s use this contrived example:

Find the customers whose account balance owing is over $5,000. Find the youngest customer in that set.

The Sandwich Model of Algorithms

//find all customers with the appropriate account balance
var owingOver5000 = new List<Customer>();
foreach (Customer c in myCustomers)
{
  if (c.TotalAmountOwing > 5000)
  {
    owingOver5000.Add(c);
  }
}

Customer youngestCust = null;
DateTime? youngestBirthdate = null;

foreach (Customer c in owingOver5000)
{
   //initialize on the first go-round; many ways to do this.
   //the youngest customer has the latest birthdate
   if (youngestBirthdate == null || c.BirthDate > youngestBirthdate) //pretend nobody shares a birthdate ;)
   {
      youngestBirthdate = c.BirthDate;
      youngestCust = c;
   }
}
//you now have youngestCust populated (in most cases)

Very fine-grain operations are explicitly laid out by the developer, and the execution path follows exactly what the developer wrote. Defects and all! The number of defects is up to you!

The Coffee Shop Model of Algorithms

Consider now the Coffee Shop model of this algorithm. We’ll use LINQ.

var youngestCust = myCustomers
                  .Where(c => c.TotalAmountOwing > 5000)
                  .OrderByDescending(c => c.BirthDate) //latest birthdate = youngest
                  .FirstOrDefault(); //SingleOrDefault would throw if more than one customer matched

It should be obvious by now, if it wasn’t at the start: the LINQ extension methods do all the looping for you, and take care of all the small bits and housekeeping. **You, as the customer, don’t care about how it’s found; you declare what you want.**

Outsource Your Loops

I hate to steal/reblog Eric Lippert’s thought on this, but it’s worth saying once more in a different way:

Avoid loops. They’re almost becoming a code smell. Let built-in methods and functionality do the boring non-value added logic for you.

You should be focused on YOUR business logic or end-goals (i.e. eating your sandwich and drinking your coffee), and less on syntax + language constructs. Take advantage of more declarative constructs provided in your language/framework. LINQ is a perfect example of this.

This post is a mashup of Luca Bolognese’s PDC 2008 F# metaphor and Eric Lippert’s post on loops. Apologies to both!


Why I Won’t Be Re-Subscribing to SQL Server Magazine

sqlmag

I’ve had a subscription myself for 2 years now to SQL Server Magazine. They’re one of many of Penton Media’s magazines, along with the Windows IT Pro site. I like paper mags for portability’s sake – the beach, roadtrips, etc. The same goes for Code Magazine and MSDN Magazine.

Dead Tree Edition

I’ve often asked myself if a paper-magazine-delivered-to-your-door makes any sense these days. Rather, does it make sense to me? Obviously the magazine industry has been in trouble, along with every other industry, since the internet moved their cheese. Along with that analysis is the subscription cost of the magazine, and its (perceived) benefits.

  • Can I get this content, or similar, or better in other places?
  • Why do I need the paper version?
  • Yes, they let you into their walled garden of SQL Server content when you’re a subscriber, but…
  • Is the content actually useful to me?
Yays

  • they’ve got Itzik Ben-Gan
  • a perception of trust
  • content is NOT automatically out-of-date on arrival

Nays

  • too many ads, and the ads are not relevant to me (yes, ads are their business!)
  • content not always suited to me
  • far too many other free resources on the internet

SqlMag’s Top 10 IT Websites – December 2009 Edition

Flipping through the Dec. 2009 copy, I saw something that had me questioning SqlMag’s quality and relevance. The headline was straightforward – Your Top 10 Favourite IT Websites. The “Your” bit seems to indicate that the readers had a vote in it, or... something. The list of sites, though, got me a bit suspicious. **More than a few questionable choices here**, and it was just too much to not say something.

![sql-server-magazine-top-10-december-2009](http://i.imgur.com/ezZRGpL.png)

**10.** Google – this was waaay too obvious to be on anyone’s serious list of IT websites. Can you even call this an IT website?! It’s certainly got lots of content indexed, but it’s questionable whether it’s an actual IT website. If they’d mentioned [Google Code](http://code.google.com/), then I could see where they were headed.

**9.** Major Geeks – this stood out like a sore thumb. Isn’t this a shareware/utilities download site? I can’t remember the last time I specifically visited the site of my own accord, but it was probably for a copy of WinZip 5.0 in 1998.

**8.** TechNet – sure. A solid and stable resource put out by a major first-party vendor. Lots and lots of technical info on anything Microsoft that you’re administering.

**7.** The Register – whaaa? This site is an anti-Microsoft FUD machine. Take the worst of Britain’s tabloid industry, combine with a dash of tech news, and you’ve got “the Reg”. A terrible pick!

**6.** ServerFault – now you’re talking. A Q&A site for system administrators and IT professionals that’s free. Perfect, a well-deserved spot.

**5.** Slashdot – hardly a tech resource, in my opinion. Call it Tech News 1.0, run by editors with their ‘base’. It’s full of anti-Microsoft FUD, this time with their Borg.gif adorning any Microsoft story. That, to me, shows exactly the level of professionalism the site operates with. A terrible pick for your Top 10. I’m debating whether it’s Slashdot or The Register who use the term ‘M$’ more often. Another sore thumb of a pick. Aside – who really thinks that term is funny?

**4.** Windows IT Pro – the readers submitted the parent company’s flagship website as a Favourite IT Website? Something doesn’t smell right here.

**3.** GPAnswers – admittedly I have NOT visited this site. Certainly Group Policy is a major set of tools to help set rules around A.D. and the computers and users within. GPAnswers’ forums run on vBulletin, and the site is run by a Group Policy MVP.

**2.** CodeProject – a clearinghouse for articles and how-to projects. A worthwhile consideration, but certainly not #2 on my list.

**1.** Experts Exchange – **ARE YOU FREAKING KIDDING ME?!** You’ve lost your mind, SqlMag! This site is the absolute scummiest Q&A site on the planet. Their entire business model is built around **CLOAKING** their site and **TRICKING THEIR VISITORS** into paying. (Yes, you can see the full set of user answers when you scroll 8 pages down!) **SqlMag, bad choice for your #1.** Even if this list WAS user-generated, **which I doubt**, any list that includes Experts Exchange loses credibility in my eyes. When I’ve asked developers about their experience finding good answers on that site, I’ve heard nothing but bad things. I’ve even thought this piece was done by an intern, or some writer’s little cousin’s brother.

My Own Top 10 Tech Sites

I can’t just cast judgement on their picks without offering my own Top 10. Let’s see how hard it is – maybe it’s tough!

  • StackOverflow & ServerFault
  • [Channel 9](http://channel9.msdn.com/)
  • [CodePlex](http://www.codeplex.com/)
  • Your RSS reader + your fav blogs. Take it to 10+, or as far as you like. Mine include developers and leaders in the community – [Phil Haack](http://haacked.com/), [Jon Skeet](http://msmvps.com/blogs/jon_skeet/Default.aspx), Scott Hanselman, Scott Gu, [Brent Ozar](http://www.brentozar.com/), and more.

You don’t need 10 to get a good list. [StackOverflow is full of excellent questions and answers on EVERYTHING a developer needs](http://www.stackoverflow.com). The amount of quality answers and [quality answerers](http://stackoverflow.com/users) is enough for a top 50 list.

Online Really Has Moved Their Cheese

I can’t bring myself to pay any more money for the mag. That really sucks for the people who work on it, and that industry in general, but they’ve got incredible opportunities to redirect their efforts to the web. The shift in the print industry has been obvious for years; hopefully they can improve their website to keep readers coming back. I believe established publishers need to be more agile or nimble in their ability to change with technology. If their strategy is to keep attracting technical readers, with SQL Server as one of those topics/content areas, then they really should ask themselves: **“What do database professionals need/want?”** Is it education, how-to, one-way articles, Q&A, user-created content, interaction with your authors... there are lots of ideas out there.

Website Mockups Done Incredibly Easy

Recently I found myself with the desire to start a new site. Rather than jumping into Visual Studio headfirst, I sat down and thought about how to start.

  • What technology am I going to use? I always jump to this in the Top 5 Things I Consider, and I know it’s not terribly important, and I know I should be thinking of other things. I always know the answer to this question, though.
  • What features should be on this site? Getting warmer. I typically write down the main features of the site and the goals of the user in point form. I am trying to train myself into this way of thinking: the site doesn’t matter unless users want to use it. Don’t think about features; think about the user’s goals.
  • Monetize? Sure, in some small way, but that’s not the main point of this new site, though.
  • How will the user see and contribute to the site? Most important and relevant question. Piss off your users, or make it too complicated, or too lame, and you’ll lose visitors.

The typical process I go through when faced with a new idea and a fresh start is to pull out the pad of paper and a pen. I am not a design genius, like most software developers, but try my best to grok user experience. Sometimes I fall flat, sometimes I look back at previous work and throw up a little, and sometimes I am happy. Most of the time, I think I’m just lucky that I don’t have too much scrutiny on my layouts and flow in my corporate line of business web apps.

mockingbird

I found this site recommended a few times on Stack Overflow. It’s almost self-explanatory how the site works.

mockingbird mockup of youtube

Draggy, Droppy, Stretchy, Copy

There’s a palette on the left, and a design surface on the right. Make a page for each of the user-goals. This isn’t going to be set in stone, and things will change. Arrange the page elements on the design surface as you like. Some neat features or user experiences I noticed:

  • Labels/text scale nicely as you grab the corner and stretch. It (smartly) figures out when to bold and/or increase the size of your label. Other elements scale beautifully, especially those that are icons – calendar, pie chart, Twitter logo, etc.
  • Double click any element to modify its text or contents – just as you’d expect.
  • Configuring a Linkbar was easy. Mockingbird really nailed this element.
  • You’re able to associate links with pages that you’ve already defined. Just drag the Page on the left onto a form element.
  • Elements are very generic – no Windows or Mac bias. It’s just a rounded rectangle representing a button.
  • There are lots of great elements that trigger new ideas. Seeing the Map and the Banner ad were great. Aside - tag clouds – meh… does anyone really like and use the tag cloud in the real world? I know Stack Overflow has one, but I have never used it.
  • The overall feel or experience is very much like a sketch. Positioning elements is made easier thanks to the horizontal and vertical alignment bars. Nice touch!

mockingbird thumbnail grid

Go Try It

Just jump into mockingbird. You don’t need to create a mockingbird account to try it; only to save and retrieve your designs. It’s free! I wonder what platform mockingbird is running on? Oh wait, it’s called Cappuccino (learn more) and it’s

implemented using a new programming language called Objective-J, which is modeled after Objective-C and built entirely on top of JavaScript

Very cool user experience! Kudos to the mockingbird developers Saikat and Sheena.


Randomizing Order of Your List or Array using LINQ

Perhaps you’ve got a collection of objects that you want ordered/sorted. I recently did. Perhaps you want them displayed in a non-deterministic manner or sorted randomly each time you write to an HTML view or otherwise consume that collection.

Randomize()

So imagine we’re working with a List of Customers.

data

Problem being, we don’t really have anything non-deterministic to work with. We could do this in SQL:

  SELECT *,
        (sin(Cust.ID * rand())) AS R
   FROM Cust
   ORDER BY R;

This, though, puts presentation logic in your data tier. Maybe you don’t care; maybe it’s not a big price to pay. I wanted to take it up to the presentation tier and do this sorting right before binding or writing these objects to the page. Taking the lead from Bruno Silva, I immediately realized this was the seed of the algorithm I was looking for. Here’s my modification of Bruno’s algorithm:

var r = new Random();
//OrderBy returns a new sequence; capture it (the original list is untouched)
var randomized = customers.OrderBy(x => r.Next());

The call to r.Next() is obviously the key. Each object as it is evaluated in the OrderBy() will get a new random number associated with it. Celebrate good times!

Turn It Into an Extension Method

I haven’t written much here about how much I love extension methods (new in C# 3.0 / .NET 3.5). I can’t count the number of times that I’ve created a static ‘utility’ method that did something like this:

  • take in an object of some kind.
  • modify it.
  • return it back to the caller.
  • caller reassigns the return back into the object that it calls.

Rather than the above, I’ve created an extension method that I’ll be able to call whenever for any collection that implements IEnumerable<T>. It’s trivial then to write the extension method for IQueryable<T>.

ExtensionMethod I’ve submitted my Randomize() extension method over at ExtensionMethod.net. What a great site, by the way, kudos to those guys for taking user submissions and helping grow the use of this feature in .NET.

public static IEnumerable<T> Randomize<T>(this IEnumerable<T> target)
{
    var r = new Random();
    return target.OrderBy(x => r.Next());
}
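Pulling it together — a quick usage sketch, self-contained so it compiles on its own (it repeats the extension method; the sample names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class EnumerableExtensions
{
    // Shuffle a sequence by assigning each element a random sort key.
    // OrderBy evaluates the key selector once per element, so each element
    // gets exactly one random number.
    public static IEnumerable<T> Randomize<T>(this IEnumerable<T> target)
    {
        var r = new Random();
        return target.OrderBy(x => r.Next());
    }
}
```

Calling `names.Randomize().ToList()` on a `List<string>` returns the same elements in a (likely) new order each time.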

Virtualization – The Developer's Desktop Machine

As developers, we usually have these things (file most of these under ‘duh!’):

  • well powered desktop machines. OK, mostly desktop. Maybe you’ve got a big honkin’ laptop-with-great-specs-but-called-a-desktop-replacement.
  • a need to deploy/test/mess about in another machine. You don’t want the cruft of your development machine to get in the way of the operation of your target environment.
  • the desire to test out a new tool: a beta/RC OS, a new beta of Visual Studio, a new server product (SVN), a community technology preview (CTP), or some other package that you just don’t want to push to your current Dev server. Hands up if you even have a ‘Dev’ server other than your machine?

In the last 4 years, I’ve usually run into something dev-related that I really wanted to get my hands dirty with. Yes, the shiny-object developer syndrome. The old way was to install that piece on your Dev machine. Months would go by, and you’d (theoretically) dirty up your registry, and contribute to the eventual slowdown of your Windows install. The logical solution at that point would be to format and repave your Dev machine. Looking forward, and a bit contrary to the point I was just making, I don’t get the sense that Windows 7 will succumb to the bloat and eventual slowdown. That said, it doesn’t invalidate the need & convenience of virtualization.

Free Options

Enter the full-on assault of free options for virtualizing operating environments. We really have an embarrassment of riches. Perhaps I am late to the party, but the freeness of these VM solutions is jolting:

My Fave Virtualization Platforms

Doubtless you know the benefits of running VMs. Lower TCO in terms of number of physical metal boxes, lower cost of electricity to power and cool, etc. For me, it’s the ability to RDP into a new machine on the ‘network’ and install/configure/test whatever I am working on. The ability to mount ISOs for OS and app installation is just another kick ass speed benefit. Even better are the instances where you can download a pre-configured VM. Check out ALMWorks’ turnkey Bugzilla and Subversion virtual machines.

Sun VirtualBox

sun_virtualBox

Great product here. Easy creation of virtual hard drive disks, mount ISOs, and a nice looking application overall. Its config files are all XML, and messing around with file locations is easy. The one thing about VirtualBox is that it doesn’t run VMs out of a single window, but rather opens a new window in your (in Windows anyway) taskbar. This is different from VMWare, likely due, in part, to VMWare being headless. My big want out of VirtualBox is the ability to run headless. It’s the one big feature that would allow me to adopt it as my one and only VM product on the developer’s machine. Sun releases this product for Windows, Mac and Linux hosts, and I think they’ve done a great job. The release frequency is stunning! Keep up the good work, Sun! 8/10

VMWare Server

vmware-server

I first got into VMWare Server when I was encouraged to run Windows 2003 on the job. Previously I had only used the Microsoft Virtual PC products, which were decent. The killer bit that won me over on VMWare Server was that it is HEADLESS. The machines can start up and shut down in parallel with your host OS. Excellent feature for those who expect those services to be up 100% of the time your dev machine is (Dev or Test SQL Server, Active Directory services, build machine, etc.). This basically gives you the Ron Popeil method of running additional machines: set it, and forget it (curse you, 90’s infomercials). The kicker for me today is that VMWare does NOT make signed 64-bit drivers for Windows 7. That was an absolute killer during the Release Candidate of Windows 7, and still is today at Win 7’s release. Reading the forums and related searches, it appears there were hacks for Vista 64, but the important part is that Microsoft REQUIRES signed drivers for 64-bit systems, starting with Windows 7. VMWare, please! Get those signed drivers out! There’s one thing about the latest releases of VMWare Server that gets me. The 1.x versions all had a built-in console on the host where you defined/configured/started/stopped your VMs. It was a nice presentation with console UI elements coming in an .exe. The 2.x releases have moved to a web-based console. I much preferred the 1.x presentation. I’m still using it on the job with Win7 32-bit, and it works well! 9/10

Microsoft Virtual PC

virtual_pc

This was my first foray into VM’ing. I think XP was the modern OS at the time, and it was a great introduction to testing out changes or running apps that I didn’t want on my machine. The kicker again here was that the system was not headless. Today, you’ll run Virtual PC 2007 on XP or Vista hosts, while the Win7 version has the ‘year’ moniker dropped. My biggest use case was a Toshiba voicemail/PBX management app that ONLY ran on XP machines that were NOT on a domain. What a pile of disappointment. The phone technician who installed the system had to be called out every time the company wanted to adjust the phone system (change a number, a name/label on the phone’s display, or any options on the phone system overall). It turns out he simply was running this web app on IIS on his laptop. He just needed to tweak the IP address in the app to match the customer’s phone system. One day I asked him how to DIY, and he suggested running the web app on my machine. It was a perfect candidate for virtualization. Thanks MS Virtual PC 2004! 7/10

VMWare Server ESXi

vmware-esxi

This is a free hypervisor product. Really it’s the entry-level product within the hypervisor line. It allows you to deploy multiple VMs on a machine and incur just a small performance penalty for the host OS. The licensing cost is zero. It runs a Linux kernel, and its footprint is ~32MB! Hmmm… could you install that to a USB thumbdrive? So the real benefit here is that you don’t have to worry about the license for your host, nor the overhead of the host. My next project will be to take my 4 virtual machines and deploy them to a machine running ESXi. How cool is it that VMWare makes it free? Your only limitation at this point is the amount of RAM and disk space (hardly a limitation today at 7 cents per GB on a SATA drive). The product is a downloadable ISO. You boot into its setup app, and from then on, you use the console application to communicate with it.


LINQ To SQL Changes in Visual Studio 2010

Damien Guard was nice enough to blog about changes coming to L2S in VS 2010. Rather, the changes are coming in the .NET Framework 4.0. The whole rumour within the development community/blogs about “LINQ To SQL is being unsupported, Entity Framework is the new coolness” was just plain wrong, I believe. Microsoft is too big, and has too many projects on the go. They’ll gauge the momentum of both technologies. Have you heard Damien as a guest on Herding Code Episode 50? In this blog post, Damien says that the focus for Microsoft will be on EF, and that’s fine. L2S is definitely not dead! I still believe that if you’re needing an ORM, and working with SQL Server, then use LINQ To SQL. I’ve tried EF, and it worked fine. I’ve ended up with 4 projects using L2S, and haven’t found any real need for EF.

Welcome Defect Fixes Coming in .NET 4.0 for LINQ To SQL

For me, the most interesting changes within Damien’s post are:

  • Contains() with enums automatically casts to int or string depending on column type
  • String.StartsWith(), EndsWith() and Contains() now correctly handle ~ in the search string (regular & compiled queries) - Here’s a small defect fixed. I’ve not needed to search for tildes very much, but I decided to give it a shot just in case! It’s true, the behavior is just as described!

tilde2.png

tilde_thumb.png

Now detects multiple active result sets (MARS) better - I am not the heaviest user of L2S, and definitely haven’t needed to specify MARS myself. Here’s the MultipleActiveResultSets defect as reported on MS Connect. It’s a simple issue where the connection string property MultipleActiveResultSets is only picked up when cased exactly as shown above. Any deviation causes the option to be ignored!
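For illustration, a connection string carrying the option in the exact casing L2S expects (the server and database names are placeholders):

```
Server=myServer;Database=myDatabase;Integrated Security=True;MultipleActiveResultSets=True
```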

DeleteDatabase no longer fails with case-sensitive database servers - Interesting that this functionality even exists. I had to research this method - DataContext.DeleteDatabase(). I can’t recall actually seeing it in the IntelliSense method list, but indeed it’s there! Most blog posts or articles I read in that 5-minute span were talking about using this method for tear-down during “unit testing”. I’d call that integration testing, and ill-informed as well. Unit tests should not include databases!

DeleteDatabase.png

varchar(1) now correctly maps to string and not char - This one has bitten me before. The column was called Gender. Of course it was storing M, F, T, or U for unknown. The core of the problem was that some rows were having a blank stored in this field, rather than null. StackOverflow to the rescue! http://stackoverflow.com/questions/1190328/linq-to-sql-exception-string-must-be-exactly-one-character-long. After some thought, I’d agree that storing this value as char(1) would be semantically more correct, more performant, and consume just one byte per tuple.

varchar.png

lenzero.png

Decimal precision and scale are now emitted correctly in the DbType attributes for stored procedures & computed columns - I couldn’t reproduce this defect, and perhaps I misunderstood. I defined a decimal(18,5) attribute on the table, and L2S brought it back without any problems. Then I realized the key to this defect was probably the ‘computed’ bit. So I went and created a simple decimal return type. I ran the query, and still no defect.

computed.png

decimalsOK.png

Then I clued in - the defect was under the LINQ To SQL Designer heading. So upon further inspection, here’s the defect in the myL2S.designer.cs. The return type is calculated as decimal(0,0). Ouch! :)

decimalzero_thumb.png

Foreign key changes will be picked up when bringing tables back into the designer without a restart - This defect has hit me a few times as well. It appears as such:

  • Edit a FK in SQL Server. If you’ve got an open L2S file, deleting + dragging and dropping those tables back onto the L2S surface DO NOT show your FK changes.
  • Clicking the Refresh button in Server Explorer doesn’t help.
  • The only solution is to close your L2S file, and re-open.

Changing a FK for a table and re-dragging it to the designer surface will show new FK’s - this is very much related to the item above.

Opening a DBML file no longer causes it to be checked out of source control - this has appeared to me a few times. Simply opening/viewing the L2S file creates a ‘check out’. Nothing earth shattering here, and glad to see this is fixed.

Can edit the return value type of unidentified stored procedure types - This feature is great! It’s very helpful when you’ve got a sproc that shapes data just the exact perfect way that you’d like to show on a custom report. Perhaps you’re binding to an asp:GridView or including as part of an MVC FormViewModel. The normal course of action is:

  • Create your proc to shape your data as you like
  • Drag the proc onto the Methods section of the L2S designer.
  • Try to change its return type to a class you’ve created solely for the purpose of binding.
  • Oops, it’s locked!

Sproc-To-Custom-Object

locked

The work-around is a bit time-consuming. You have to:

  • open up the myL2S.designer.cs file
  • find your method marked with the attribute containing your proc name
  • replace the default return type int with ISingleResult<T> - in my simple example here, it’s ISingleResult<CustomerReport>

modifydbml_thumb.png

The frustrating bit here is that this behavior isn’t predictable (to me at least). I CONSTANTLY have to go through this process to properly set the return type of 2 procs in one particular project. This defect fix in particular by Microsoft will be welcome!


Using System.Reflection to output your POCO object's property names + values

On two recent projects, I’ve had the need to write out the properties of multiple custom entities. The example here will be around the venerable `Customer` class. Let’s pretend that a requirement is to send an email to a support rep in your company each time a customer makes an order. Yes, we’ll be logging the order to the database, but the value-add here is that the recipient of the email receives a link to the order, plus all the details of the customer and order included in the email. So we need to take an object, iterate through its properties and their values, and put them into a string for the body of an email.

The first thought might be: “just loop through each property and write it to the email body”. That’s fine, but as soon as you add a property in the future, you need to remember to change the emailing code. That’s not helpful for the developer maintaining this application. Let’s look at the Reflection namespace in the .NET Framework.

Reflection in .NET

Reflection lets you programmatically find out information about types in your assemblies at runtime. Using classes in the System.Reflection namespace, you can learn details of a class’s methods & properties. In the topic at hand, we’re interested in the names, values, and datatypes of properties. You could possibly use reflection to get info at runtime about a method’s parameters.

Override Customer.ToString()

The Customer class needs its own ToString() method overridden. Here’s the class:

using System;
using System.Reflection;
using System.Text;

public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }
    public string HomeAddress { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public DateTime JoinDate { get; set; }

    public override string ToString()
    {
        var personString = new StringBuilder();
        foreach (PropertyInfo pi in this.GetType().GetProperties())
        {
            personString.AppendLine(string.Format("{0}: {1}", pi.Name, pi.GetValue(this, null)));
        }
        return personString.ToString();
    }
}

var cust = new Customer { Name = "Mike", Email = "some@dot.com", HomeAddress = "1 Some Street", State = "ZZ" };
string customerOutput = cust.ToString(); // now put this in your email body!
Console.WriteLine(customerOutput);

/* Console will show:
        Name: Mike
        Email: some@dot.com
        HomeAddress: 1 Some Street
        City:
        State: ZZ
        JoinDate: 1/1/0001 12:00:00 AM   (default DateTime; exact format is culture-dependent)
*/

Consider

  • collections aren’t handled well.
  • complex datatypes aren’t either.
  • consider customizing your implementation to include special formatting for DateTimes.
  • what happens when you want well-formatted output for a property like HomeAddress? We’d probably want it to show as “Home Address”.
  • consider handling null values better than writing "null".
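Several of those considerations can be folded right into the override. A sketch - the "(none)" placeholder, date format, and regex are my own choices, not from the original post:

```csharp
using System;
using System.Reflection;
using System.Text;
using System.Text.RegularExpressions;

public override string ToString()
{
    var personString = new StringBuilder();
    foreach (PropertyInfo pi in this.GetType().GetProperties())
    {
        // "HomeAddress" -> "Home Address": insert a space between a
        // lowercase letter and the capital that follows it.
        string friendlyName = Regex.Replace(pi.Name, "([a-z])([A-Z])", "$1 $2");

        object value = pi.GetValue(this, null);
        string display;
        if (value == null)
            display = "(none)";
        else if (value is DateTime)
            display = ((DateTime)value).ToString("yyyy-MM-dd");
        else
            display = value.ToString();

        personString.AppendLine(string.Format("{0}: {1}", friendlyName, display));
    }
    return personString.ToString();
}
```

Collections and complex types would still need their own branches in the if/else chain.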

Wrap Up

I know this will save developers time in two ways:

  • Not having to iterate manually through X properties to build your string for email. That’ll scale depending on the number of properties and your adeptness at Ctrl-C, Ctrl-V.
  • When you add a new simple property, you will NOT have to adjust anything for it to show in the .ToString() method.

I’m always interested to see how devs are using System.Reflection.


Serializing Your POCO Objects to String then to XmlDocument

Today’s Dreaded Exception

Data at the root level is invalid.

Found a problem recently when serializing a custom object. Here’s what I was working with. Warning: this is the bathroom wall of code version. Please do not copy/paste this into production.

public static XmlDocument SerializeToXmlDoc(Object obj)
{
    try
    {
        XmlDocument xmlDoc = new XmlDocument();
        string xmlString = SerializeIt(obj);
        xmlDoc.LoadXml(xmlString);
        return xmlDoc;
    }
    catch (Exception e) { return null; }
}

public static string SerializeIt(Object obj)
{
    try
    {
        MemoryStream memoryStream = new MemoryStream();
        XmlSerializer xs = new XmlSerializer(obj.GetType());
        XmlTextWriter xmlTextWriter = new XmlTextWriter(memoryStream, Encoding.UTF8);
        xs.Serialize(xmlTextWriter, obj);

        memoryStream = (MemoryStream)xmlTextWriter.BaseStream;

        return UTF8ByteArrayToString(memoryStream.ToArray());                
    }
    catch (Exception e) { return null; }
}

private static string UTF8ByteArrayToString(byte[] characters)
{                      
    return new UTF8Encoding().GetString(characters);            
}

The problem was that as I passed one of my custom objects to it, I’d get this exception:

Data at the root level is invalid. Line 1, position 1.

Fine, root level… got it. Let’s take a peek at what’s actually trying to be loaded.

Wait, What The?

Here’s the kicker: there was a funny null/something character at the beginning of the string, and therefore my XmlDocument couldn’t successfully execute the LoadXml method. Confessional: I scraped this method from the interwebs’ bathroom wall of code and tweaked it to suit my needs. I figured it was ready for ANY kind of POCO object. Guess not - I’d hit the 10% case where it didn’t work well. Let’s fire up the Text Visualizer and figure out why that string isn’t loading into an XmlDocument properly.

(screenshot: Text Visualizer showing the xml string)

Hmm. That’s funny. What is that?! Turning to the Immediate Window didn’t give any real answers as to the value of that unknown/bad character.

(screenshot: Immediate Window, Visual Studio 2008)

Hmm. Looks blank. I then copied that value right from the Immediate Window, pasted into Notepad, it comes out as a question mark. Nice! Here’s the direct paste:

?xmlString.Substring(0,1)
>"?"

Solved!

You could go down all kinds of kludgey roads and try to replace the first character if it’s not an angle bracket <, or try to trim null chars, etc. The mystery character is actually the UTF-8 byte order mark (BOM): with Encoding.UTF8, the XmlTextWriter writes the BOM bytes to the front of the stream, and decoding the raw byte array back into a string keeps it as an unprintable leading character. It turns out that a StringWriter will do the job well in this case. No more stray leading character, no more failed load of the string. (via the bathroom wall of code at http://asp.net2.aspfaq.com/xml-serialization/simple-serialization.html)

public static string SerializeIt(Object obj)
{
    try
    {
        var xmlSer = new System.Xml.Serialization.XmlSerializer(obj.GetType());
        using (var sw = new StringWriter())
        {
            xmlSer.Serialize(sw, obj);
            return sw.GetStringBuilder().ToString();
        }
    }
    catch (Exception e) { return null; }
}

The MemoryStream algorithm worked well for some of my POCO objects, but not for another.
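If you’d rather keep the MemoryStream approach, the BOM can also be suppressed at the source. A sketch, assuming the rest of the method stays as it was - the key is constructing UTF8Encoding with its emit-BOM flag set to false instead of using Encoding.UTF8:

```csharp
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Serialization;

public static string SerializeWithoutBom(object obj)
{
    // new UTF8Encoding(false) writes no byte order mark,
    // so the resulting string starts cleanly with '<'.
    using (var memoryStream = new MemoryStream())
    using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding(false)))
    {
        var xs = new XmlSerializer(obj.GetType());
        xs.Serialize(xmlTextWriter, obj);
        xmlTextWriter.Flush();
        return Encoding.UTF8.GetString(memoryStream.ToArray());
    }
}
```

The using blocks also take care of disposing the stream and writer, which the original version skipped.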


SQL Scripts - Countries

A new project has me writing up the same old country/state/province reference tables. My feeling is that these fairly static entities should be normalized and referenced by foreign key. I had asked a StackOverflow question on whether other developers had this prebuilt set of country/state/province create scripts in their toolbelts.

Create The Schema

Create your Country and State tables with this SQL script. As always, name the State table whatever you like (ProvState, tblState, whatever). Some folks don’t like table names to be the same as reserved keywords.

(ERD: Country and State tables)
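For a feel of the shape before grabbing the script, here is a minimal sketch of the two tables. Column names and sizes are illustrative only - the linked script is the real deal:

```sql
-- Country/State reference tables (illustrative columns).
CREATE TABLE Country (
    CountryId   INT IDENTITY(1,1) PRIMARY KEY,
    CountryName NVARCHAR(100) NOT NULL,
    IsoCode     CHAR(2)       NOT NULL
);

CREATE TABLE State (
    StateId      INT IDENTITY(1,1) PRIMARY KEY,
    CountryId    INT NOT NULL REFERENCES Country (CountryId),
    StateName    NVARCHAR(100) NOT NULL,
    Abbreviation NVARCHAR(10)  NULL
);
```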

Populating

Here is a collection of insert scripts to get the data populated quickly for you.

Have Fun

Some developers go further down the normalization path by creating a City table, but I usually pass on that. As always, that decision is largely dependent on the problem domain or task at hand. Yippee for me - it’s always fun trying to reconcile data when someone enters their city incorrectly as “St. Paul” / “Tuscon” / “Pittsburg” instead of Saint Paul / Tucson / Pittsburgh. If you would like to contribute an insert script or two for a country that you would like to see here, just tweet at me with your script!


Visual Studio keyboard shortcut - Debug Test in Current Context

When hammering away at a unit test, I often found myself wanting to debug the current test and its associated code. I of course wanted to use the debugger, break points, inspect values, use the Watch tool, etc.

The First Iteration

  • Put the cursor in the test method.
  • Go to Visual Studio IDE’s menu
  • Click Test - Debug - Tests in Current Context
  • Debug as per normal


Do It Smarter

Then I remembered that you could put the shortcuts in the right click context menu. Ah ha!


You can see where this is going if you paid attention to the first iteration, as well as the title of this blog post. Taking your hands off the keyboard to reach for the mouse adds so much wasted time in development. File this under “what were you thinking?”.

Smartest

Now it’s all keyboard goodness:

  • Cursor in the test method
  • Ctrl-R, Ctrl-T
  • use a cute mnemonic in your head while typing the combo: Re-Test

What is YOUR favorite Visual Studio keyboard shortcut?


Mirth Converting HL7 v3 XML messages to HL7 v2

The Goal: Have your HL7 v3 converted by Mirth to a specified HL7 v2 message.

The Prerequisites

  • Using Mirth 1.8+. This may work with previous versions, but this solution hasn’t been tested on anything but 1.8.
  • You know what your HL7 XML looks like. You have a sample message available.
  • You know what you want the HL7 v2 to look like.
  • You may be expecting repeating segments to be converted. That’s OK, it will be covered here.

Let’s Do It

1. Log in to your Mirth Administrator and create a new channel.

  • Give your channel a name
  • Ensure the incoming datatype is set to HL7 v3
  • All the other defaults are OK
  • Save


2. Switch to the Source tab.

  • Ensure your Listener port is something unique.
  • All the other defaults are OK


3. Switch to the Destinations tab.

  • Give the first Destination a good name
  • Change the connector type to anything
  • Save


4. Switch to the “Edit Transformer” menu option.

  • Click “Add New Step”
  • Change the Type to “JavaScript”. This walkthrough will take the JavaScript route as more code/mapping can be fit into one window. The other mappers may fit your style better!
  • Give it an appropriate name, hit Enter


5. On the top right pane, click the Message Templates tab.

  • This is where your XML HL7 v3 template will go. If you don’t have a template, you can make one up for your experimental/development purposes!
  • Find your XML HL7 v3 template, and paste it here.


6. On the bottom right pane, find the Outbound Message Template. It will likely default to HL7 v3.0.

  • Ensure the data type dropdown shows HL7 v2.x.
  • Find your HL7 v2 template, and paste it here. Keep any default values that you have, but go through and prefix them with something unique to remind yourself to ensure that value is mapped from the HL7 v3 message. That’ll help identify any mapping that has been missed.


7. Click Message Trees.

  • You’ll see a tree representation of both your messages.


8. Now it’s time to match up the elements that you want to go from the v3 to the v2. Draggy-droppy time! Repeat this for EACH data field that you want moved from the v3 to the v2.

  • Drill down into the Outbound Message Template. Find the v2 element that you want filled. (e.g. Patient Last Name at PID 5.1)
  • Pick the Green dot icon. Drag it over and drop onto to the JavaScript window.
  • You’ll now have something like this in the JavaScript code window:

    tmp['PID']['PID.5']['PID.5.1']

  • Type the equals sign at the end of this line. We’re going to assign something to this field in the next step. Now you’ll have this:

    tmp['PID']['PID.5']['PID.5.1'] =

  • Go back to the Inbound Message Template Tree. Drill down into where the Patient Last Name is at.
  • Drag and drop that Green dot icon over to the JavaScript window. Drop it off after the equals sign.
  • You should now have something like this code in your JavaScript window:

    tmp['PID']['PID.5']['PID.5.1'] = msg['controlActProcess']['subject']['target']['identifiedPerson']['name']['family'].toString()

  • Congrats, you’ve just mapped one property from v3 to v2.
  • Repeat the above step as necessary for all the fields that you need transformed from v3 to v2.
  • You can code any steps by hand in JavaScript now that you have the basic syntax down. I’d suggest creating a new Transformer step for each v2 segment; it will help you find/fix problems in a particular segment if/when they appear, and it follows the idea of modularity.

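The prerequisites mentioned repeating segments. The same drag-and-drop mapping extends to them with a loop in the JavaScript step. A sketch in Mirth’s E4X JavaScript - the element path below comes from the sample mapping above, and your message structure may differ:

```javascript
// Map each repetition of the v3 name element to a PID.5 repetition.
var names = msg['controlActProcess']['subject']['target']['identifiedPerson']['name'];
for (var i = 0; i < names.length(); i++) {
    tmp['PID']['PID.5'][i]['PID.5.1'] = names[i]['family'].toString();
    tmp['PID']['PID.5'][i]['PID.5.2'] = names[i]['given'].toString();
}
```

In E4X, length() is a method on the element list, and indexing tmp creates the v2 field repetitions as you assign them.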

9. Use this sample channel for the full solution.

10. Deploy your channel and use the Send Message command.

  • Copy/paste a V3 message with all the values filled in.
  • Click Send.
  • In the “Encoded Message” tab of the Destination Connector, you’ll see your output HL7 v2.