WELCOME

This blog is where I post about my consulting work with Microsoft Technologies, and other random tidbits that don't fit in my Photo Blog or my Iraq Blog.

Monday, June 4, 2012

Updating ScrappyDB Membership Provider to support Amazon DynamoDB

For anyone who is interested in using the ScrappyDB ASP.NET Membership Provider for Amazon SimpleDB...

I would like to announce that I am actively developing an updated version of ScrappyDB that will also support Amazon DynamoDB as a backend. Functionality should be identical, and it should be transparent which backend you are using.
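Since it is a standard ASP.NET Membership Provider, the application code stays the same regardless of backend. As a minimal sketch (assuming the ScrappyDB provider is registered as the default membership provider in web.config), the usual Membership calls just work:

using System.Web.Security;

// Standard ASP.NET Membership API; whether the registered provider stores
// users in SimpleDB or DynamoDB is a configuration detail, not a code change.
MembershipUser user = Membership.CreateUser("jdoe", "S0mePassw0rd!", "jdoe@example.com");
bool valid = Membership.ValidateUser("jdoe", "S0mePassw0rd!");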

The primary differences are pricing and scalability:
  • For a low-volume site, SimpleDB should be cheaper.
  • For a high-volume site, DynamoDB should provide near-infinite scalability with consistent performance.
Expected release date: "real soon now"

Friday, August 19, 2011

Speaking at MDC in September

I'll be speaking at the Minnesota Developers Conference on September 29th (http://mdc.ilmservice.com). 

Topic: "An ALT.NET Microsoft cloud: Developing and hosting ASP.NET MVC apps in the cloud without Azure. Demonstration and discussion of alternate tools and platforms for developing cloud based .NET applications including: Managing dependencies with Nuget. Source code control with Mercurial and Bitbucket. Continuous Integration and hosting with AppHarbor and Amazon EC2. NoSQL database backends with Amazon Simple DB."

Monday, June 27, 2011

ScrappyDB: Code First + Linq for Amazon SimpleDB

For several years I've been working on ScrappyDB, which is my personal open source .NET library for Amazon SimpleDB.

I quietly released version 1.0 on CodePlex (http://scrappydb.codeplex.com/) several years ago, and despite a few downloads I'm not aware of a single production user (other than me). Not to worry... no hard feelings... I wrote this for my own use and released it in hopes that it "might" prove useful for somebody else someday...

This month I released version 2.0 to NuGet (http://nuget.org/List/Packages/ScrappyDB) with slightly higher expectations: "ScrappyDB 2.0 is a code first style object mapping library and Linq provider for Amazon SimpleDB. Also included is an ASP.NET Membership Provider for Amazon SimpleDB."

With support for "Entity Framework style" code-first syntax and Linq, I'm hopeful that a few other people might actually find this useful. But nobody is going to use it if they can't find it, so I'm taking it upon myself to start writing a few blog posts demonstrating how it works, and eventually to highlight how easy it is to use Amazon SimpleDB as a backend for .NET websites.

To get started here are a few code examples that demonstrate using ScrappyDB to run the example queries found in the excellent SimpleDB Query 101 document from Amazon:

Following EF Code First conventions we create a class to define our schema and then a context class to access the database. The only syntactic difference between EF and ScrappyDB at this point is "SdbContext" in place of "DbContext".
  
public class AwsSampleContext : SdbContext
{
    public SdbSet<Book> Books { get; set; }
}

public class Book
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string Author { get; set; }
    public string Year { get; set; }       // stored as a string in SimpleDB
    public int YearInt { get; set; }       // strongly typed alternative, used below
    public int NumberOfPages { get; set; }
    public List<string> Keywords { get; set; }
    public List<string> Rating { get; set; }
}

To execute the sample query:
select * from mydomain where Title = 'The Right Stuff'
All we would need to do is:
var dbContext = new AwsSampleContext();
var result = from a in dbContext.Books where a.Title == "The Right Stuff" select a;
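As with Linq providers in general, the query typically doesn't execute until the result is enumerated; consuming it is plain C#:
foreach (var book in result)
{
    Console.WriteLine("{0} by {1}", book.Title, book.Author);
}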
SimpleDB supports only one datatype: strings. This means no native numeric types and no datetimes. ScrappyDB supports strongly typed data for a subset of common .NET types.
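The standard SimpleDB technique for making typed data queryable is to encode values as strings that sort correctly, for example zero-padding numbers. The sketch below shows the general idea only; it is not necessarily ScrappyDB's exact storage format:

// Zero-pad numbers to a fixed width so string comparison matches numeric
// comparison. Illustrative only -- not necessarily ScrappyDB's internal format.
static string EncodeYear(int year)
{
    return year.ToString("D4");   // 85 -> "0085", 1985 -> "1985"
}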

Linq was designed generically and doesn't always map well to underlying database features and implementation oddities. This is particularly true with SimpleDB, where the "simple" feature set can't support many concepts that are typical in Linq to SQL.

Here is another sample query from the Query 101 document, and several different ways to approach implementing it in ScrappyDB:
select * from mydomain where Year > '1985'
The .NET-centric way to implement this in ScrappyDB would be to define the Year field as a numeric type, in which case the Linq query would be as you might expect from Linq to SQL:
var dbContext = new AwsSampleContext();
var result = from a in dbContext.Books where a.YearInt > 1985 select a;
If you define the field "Year" as a string in your .NET class, then you will need to use the ScrappyDB-specific extension method "GreaterThan" to implement this query:
var dbContext = new AwsSampleContext();
var result = from a in dbContext.Books where a.Year.GreaterThan("1985") select a;
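GreaterThan ships with ScrappyDB; purely as a hypothetical sketch, an extension method of that shape might be declared like this (in a Linq provider the call is normally translated during expression-tree parsing rather than executed, so the body is just a sensible in-memory fallback):

public static class StringComparisonExtensions
{
    // Hypothetical sketch only -- not ScrappyDB's actual implementation.
    public static bool GreaterThan(this string value, string other)
    {
        return string.CompareOrdinal(value, other) > 0;
    }
}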
If you would like to see more, you can check out the source code on Bitbucket (https://bitbucket.org/scrappydog/scrappydb). The integration tests in the source implement almost all of the examples from the Query 101 document.

To see ScrappyDB in a sample MVC 3 application, check out the sample code on Bitbucket (https://bitbucket.org/scrappydog/scrappydb.mvcsample) or the sample application running on AppHarbor (http://sample.scrappydb.com/).

Note: AppHarbor runs on Amazon EC2, and so it is a GREAT place to host applications that use SimpleDB as a backend!

The easiest way to try ScrappyDB for yourself is to add it to your Visual Studio project with NuGet (http://nuget.org/List/Packages/ScrappyDB).

Wednesday, September 22, 2010

Great tip on serving compressed CSS and JavaScript files from Amazon S3

Fantastic performance tip on hosting compressed CSS and JavaScript files from Amazon S3:

"Typically, I use a tool called Juicer ( http://github.com/cjohansen/juicer) to concatenate and compress my CSS and JavaScript files (with YUI Compressor and Google Closure Compiler, respectively) to ensure they're as small as possible and require the fewest number of HTTP requests ( http://developer.yahoo.com/performance/rules.html#num_http).

After that, I launch Terminal and run `gzip -9 filename.css && gzip -9 filename.js` to compress them as tightly as possible. This will give me filename.css.gz and filename.js.gz. I remove the .gz extension, and upload those files to S3.

Lastly, I add a custom HTTP header — `Content-Encoding: gzip` — to each of the files in S3. This tells the browser the same thing as Apache would if it were compressing them on the fly. The browser then knows to decompress the content after downloading it.

Since the files are pre-compressed instead of compressing on the fly (a la Apache), fewer server resources are used and the response times are faster. "



Original source: http://developer.amazonwebservices.com/connect/thread.jspa?messageID=194918&tstart=0#194918
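If you want to do the same thing from .NET rather than by hand, setting the header at upload time might look roughly like this (a sketch against the AWS SDK for .NET; property names can vary a bit between SDK versions, and the bucket, key, and path are placeholders):

using Amazon.S3;
using Amazon.S3.Model;

// Upload a pre-gzipped stylesheet and mark it with Content-Encoding: gzip
// so browsers know to decompress it after download.
var client = new AmazonS3Client();
var request = new PutObjectRequest
{
    BucketName = "my-bucket",                  // placeholder
    Key = "filename.css",
    FilePath = @"C:\build\filename.css",       // gzipped content, .gz extension removed
    ContentType = "text/css"
};
request.Headers.ContentEncoding = "gzip";
client.PutObject(request);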

Tuesday, September 14, 2010

Last day on the 3M VAS project

Today marks my last official day on the 3M VAS project (http://vas.3m.com).  It's been fun getting to develop a production site for a major company with Silverlight 4 and Azure!

Mike Hodnick, one of my colleagues on the development team, has a nice wrap-up post about the project on his blog:  http://hodnick.com/post/1121185443/vas3m

(Just to be clear: 3M VAS is very much alive and well.  This is just the end of a major release cycle and most of the development team is moving on to other projects.)

Tuesday, September 7, 2010

Moving to Google Voice as my primary public phone number

I've been playing around with Google Voice for a while now and I really like the speech-to-text transcription it does on voicemails, so I've decided to start using it as my primary phone number for all business stuff.

Interesting to note how much Google and Amazon cloud stuff I use in my day to day life... and nothing from Microsoft... hmmm...

Friday, August 13, 2010

Desktop performance tweaking: using an SSD for ReadyBoost and Virtual Memory Paging File

Little known fact: Windows 7 CAN use an internal SSD drive for ReadyBoost.

Example scenario where it makes "some" sense: You have an existing Windows 7 desktop that could use a performance boost, but you don't have time to reinstall/migrate the boot drive to an SSD.

* Install a cheap SSD, and configure it for ReadyBoost (just right-click on the drive, the same as you would a USB flash drive). NOTE: ReadyBoost will only use 4GB of the drive.

* Move your virtual memory paging file from the boot drive to the SSD.

* Move any data files you are working with actively to the SSD (such as source code if you are developer).

Total time to implement: About 10 minutes
Performance improvement: Noticeable/useful but not magical