Monday, May 25, 2015

MongoDB and .NET POCOs

When designing MongoDB documents, use nested objects and nested collections to take advantage of how POCO objects can be stored directly as MongoDB documents, doing away with the typical SQL impedance mismatch that ORMs try to solve.

By keeping nested objects within the same document, there is no need for JOINs, and any constraints can live within the documents of a collection, just as relationships exist between tables in a relational database. In other words, feel free to use nested collections such as arrays within your document or POCO object.

The MongoDB driver takes care of serializing your POCOs to MongoDB documents (stored internally as binary BSON) and deserializing them back to POCO objects without any noticeable friction, making programming against MongoDB very simple once the appropriate driver (the Mongo C# driver in our case) is installed in your project.
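
Here is a minimal sketch of that round trip using the 1.x (legacy) C# driver; the Order/Address classes, the database name and the connection string are made up for illustration:

using System.Collections.Generic;
using MongoDB.Bson;
using MongoDB.Driver;

// Hypothetical POCO with a nested object and a nested collection
public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
}

public class Order
{
    public ObjectId Id { get; set; }
    public Address ShippingAddress { get; set; }   // nested object
    public List<string> Tags { get; set; }         // nested collection (a BSON array)
}

public class OrderRepository
{
    public Order SaveAndReload(Order order)
    {
        var client = new MongoClient("mongodb://localhost");
        var orders = client.GetServer()
                           .GetDatabase("shop")
                           .GetCollection<Order>("orders");

        orders.Insert(order);                  // POCO serialized to a BSON document; Id gets assigned
        return orders.FindOneById(order.Id);   // BSON document deserialized straight back into the POCO
    }
}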

Avoid laying out your POCO objects based on your experience of modeling for SQL (multiple lean entities with associations that turn into foreign key constraints). Stop replicating SQL in your POCO objects when working with MongoDB; instead, build rich domain models with nested objects, arrays and nested collections. It is akin to pre-joining your SQL data across several tables out of the box!

Any data you typically work with at the same time should be combined into a single document in your document-oriented design. The concept of an aggregate root in Domain Driven Design maps very nicely to documents.

TIPS:

1) When declaring a decimal field in your .NET POCO, decorate it with the [BsonRepresentation(BsonType.Double)] attribute to convert it to the BSON double type. Otherwise the decimal CLR type is serialized as a string, making any comparison or index against the field string-oriented and slow. For example, you may want to sort and filter on a decimal Price field.

2) Also, with the default serialization, .NET characters and enums are represented as integers when a string may be desirable. Keep that in mind.

3) For DateTime, MongoDB always stores UTC while .NET can use UTC as well as Local time. This can have side effects if we are not explicit about which kind we want on the POCO. If using .NET Local time, always apply the attribute [BsonDateTimeOptions(Kind = DateTimeKind.Local)]. Then when we deserialize the document back into the .NET type with BsonSerializer.Deserialize<T>, it will use the Local time zone correctly and not UTC.

4) Use DateOnly = true in BsonDateTimeOptions if you want to ignore the time portion.

5) Use the [BsonIgnore] attribute to keep a property out of the document.

6) If you want a property other than one named Id to serve as the document's _id field, decorate it with the [BsonId] attribute.

7) At the class level, decorate with [BsonIgnoreExtraElements] if the stored documents may contain fields that are not on the POCO (for example after you remove a property); without it the driver throws a file format exception during deserialization. A combined sketch of these attributes follows.
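
To tie these tips together, here is a hedged sketch of a decorated POCO; the Product class, its properties and the enum are made up for illustration:

using System;
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;

[BsonIgnoreExtraElements]                            // tip 7: tolerate extra fields in the stored documents
public class Product
{
    [BsonId]                                         // tip 6: this property maps to the _id field
    public string ProductCode { get; set; }

    [BsonRepresentation(BsonType.Double)]            // tip 1: store the decimal as a BSON double (AllowTruncation = true may be needed if precision loss is acceptable)
    public decimal Price { get; set; }

    [BsonRepresentation(BsonType.String)]            // tip 2: store the enum as a string instead of an int
    public ProductCategory Category { get; set; }

    [BsonDateTimeOptions(Kind = DateTimeKind.Local)] // tip 3: round-trip Local times correctly
    public DateTime LastOrdered { get; set; }

    [BsonDateTimeOptions(DateOnly = true)]           // tip 4: persist only the date portion
    public DateTime ReleaseDate { get; set; }

    [BsonIgnore]                                     // tip 5: never written to the document
    public decimal DiscountedPrice { get; set; }
}

public enum ProductCategory { Book, Movie, Music }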





Sunday, May 24, 2015

Using Redis Client Code Snippets with ServiceStack

Code snippets in an MVC application:

using (IRedisClient redisClient = new RedisClient())
{
      var userClient = redisClient.GetTypedClient<User>();
      var user = new User
      {
            Name = "Clint",
            Id = userClient.GetNextSequence()
      };
      userClient.Store(user); // Save the new User
}

On the redis-cli.exe command line, type monitor to see what's happening when you run the app.


MVC View to submit UserId from a dropdown selection to another Controller's action:

Select User:
@using (Html.BeginForm("Index", "Tracker", FormMethod.Post))
{     
    @DropDownList("userId", null, string.empty, new {@onchange = "form.submit();"});
}
@Html.ActionLink("Create New User", "NewUser", "Users");

Now with Redis, for updates we don't really issue an UPDATE like in SQL. Instead we fetch the User, edit the data and re-save the User object.

And here's how you would associate a list of data with a User key-value pair: create a list keyed by the UserId that stores the data (say, integer values) for each UserId.

using (IRedisClient redisClient = new RedisClient())
{
      long userId = 213;
      int amount = 100;

      var userClient = redisClient.GetTypedClient<User>();
      var user = userClient.GetById(userId);

      var historyClient = redisClient.GetTypedClient<int>();
      var historyList = historyClient.Lists["urn:history:" + userId];

      user.Total += amount;       
      userClient.Store(user); // Update User

      historyList.Prepend(amount); // always push to first item in the list
      historyList.Trim(0, 4); //restrict items in list to always 5 items

      /* Add to a sorted set that stores Users with key=name and value=total
         The beauty of Redis is it will not cause dups here and just updates the Total
         and maintains this sorted list! No code to write to check for dups etc.
     */
      redisClient.AddItemToSortedSet("urn:leaderboard", user.Name, user.Total);

      ViewBag.HistoryItems = historyList.GetAll();
}


//And to retrieve the leaderboard set, just call a method on the Redis client:
var leaderboard = redisClient.GetAllWithScoresFromSortedSet("urn:leaderboard");

Note: to avoid duplicates getting into the sorted set when we change the name of the User, make sure to remove the User from the sorted set whenever we "update", and then add the User back to the sorted set when we re-save it. To remove the User from the sorted set, call redisClient.RemoveItemFromSortedSet("urn:leaderboard", user.Name).
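
Here is a rough sketch of that remove-and-re-add path using the same ServiceStack calls as above; the user and newName variables are assumed to come from the surrounding action:

using (IRedisClient redisClient = new RedisClient())
{
      var userClient = redisClient.GetTypedClient<User>();

      redisClient.RemoveItemFromSortedSet("urn:leaderboard", user.Name); // drop the entry keyed by the old name

      user.Name = newName;
      userClient.Store(user);                                            // re-save the User

      redisClient.AddItemToSortedSet("urn:leaderboard", user.Name, user.Total); // add it back under the new name
}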

And there you go! As simple as that.

TODO:

  • Add the ability to delete Users (slightly complicated as you need to keep track of where all your User data are)
  • Move the history data to the User object rather than keeping it in its own list of integers
  • Store the User data in a hash. Is there any benefit to that?


Reference: Pluralsight training on Building NoSQL With Redis by John Sonmez
http://www.pluralsight.com/courses/building-nosql-apps-redis



Thursday, May 21, 2015

AngularJS Directive - Inherited Scope

Here's an example of a directive using inherited scope in AngularJS. See the plunk here:
http://plnkr.co/S8civ0l5KRL5cN2PBBJF

//index.html
<!DOCTYPE html>
<html ng-app="app">
  <head>
    <script data-require="jquery@2.1.3" data-semver="2.1.3" src="http://code.jquery.com/jquery-2.1.3.min.js">           </script>
    <link data-require="bootstrap@*" data-semver="3.3.2" rel="stylesheet"     href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.2/css/bootstrap.min.css" />
    <script data-require="bootstrap@*" data-semver="3.3.2" src="//maxcdn.bootstrapcdn.com/bootstrap/3.3.2/js/bootstrap.min.js"></script>
    <script data-require="angular.js@1.3.0" data-semver="1.3.0" src="https://code.angularjs.org/1.3.0/angular.js"></script>
    <link rel="stylesheet" href="style.css" />
    <script src="script.js"></script>
  </head>
  <body ng-controller="mainCtrl" class="container">
    <h5>
      <user-info-card user="user1"></user-info-card>
      <user-info-card user="user2"></user-info-card>
    </h5>
  </body>
</html>

/***************************************************************/
// script.js
angular.module('app',[]);
angular.module('app').controller('mainCtrl', function($scope){
  $scope.user1 = {
    name: "Luke Skywalker",
    address: {
      street: '6332 Cameron Forest Ln',
      city: "Charlotte",
      planet: "Earth"
    },
    friends: ['Han', 'Leia', 'Chewbacca']
  };

    $scope.user2 = {
    name: "Mia Farrow",
    address: {
      street: '1236 Cameron Forest Ln',
      city: "Charlotte",
      planet: "Vulcan"
    },
    friends: ['Lan', 'Tia', 'Chewbacca']
  };

  //console.log('parent scope', $scope);
  //Take this in the directive, so we don't break encapsulation and the directive handles this event
  /*
  $scope.knightMe = function(user){
    user.rank = "knight";
  }
  */
});
angular.module('app').directive('userInfoCard', function(){
  return {
    restrict: "E",
    templateUrl : "userInfoCard.html",
    replace: true, // will replace directive with the contents and not display the directive tag in the html element as best practice
    //scope: true, //inherited scope; false = shared scope with parent containing controller is the default
                 //scope: {}, would be isolated scope and the directive cannot see the controller's scope now
    scope: {
        user :"="
      }, //isolated scope but now controller(parent) scope user object can be shared by passing in
         // an user object user1 from the controller. Useful as the same directive can now be used across multiple controllers
         // and they can pass in their scope user objects.
    controller: function($scope){
        $scope.collapsed = false; //parent variable
        $scope.knightMe = function(user){
              user.rank = "Knight";
         };
        $scope.collapse = function(){
          $scope.collapsed = !$scope.collapsed;
        }
    }
  }
});
//Put Address in its own directive so we can collapse it inside the panel-body of the userInfoCard directive.
//This will demonstrate inherited scope
angular.module('app').directive('address', function(){
  return {
    restrict: "E",
    scope: true, //THIS IS THE MAGIC allowing Address to define its own collapsed variable; otherwise
                 //the variable is shared by default and clicking the address would close the entire panel-body
 
    templateUrl: "address.html",
    controller: function($scope){
   
      /* The assignment below gives address its own collapsed variable because of JavaScript's prototypal inheritance.
         Without this statement, which effectively creates (overrides) the collapsed variable for Address,
         it uses the collapsed variable from the parent scope (the userInfoCard directive, since address is nested inside it).
         Scope inheritance (setting scope: true in the child directive, as above) propagates the scope from the parent to the nested child directive.
         In short, inherited scope is very powerful since the child directive can see properties in the parent scope, but be careful with it.
      */
      $scope.collapsed = false;
   
      $scope.collapseAddress = function(){
        $scope.collapsed = true;
      }
   
      $scope.expandAddress = function(){
        $scope.collapsed = false;
      }
    }
  }
})
                 // Note that we cannot use isolated scope here because then the user object in the parent
                 // directive would not be accessible in the child directive

/*************************************************************************/

//userInfoCard.html parent directive
<div class="panel panel-primary">
  <!-- Have to wrap all elements within a root element for replace:true to work in  the directive -->
  <!-- The canonical Component directive. Almost always implemented as an element, defines a new widget-->
      <div class="panel-heading" ng-click="collapse()">{{user.name}} </div>
      <div class="panel-body" ng-hide="collapsed">
         <!--
         <h4>Address:</h4>
         <div ng-show='!!user.address' >
           {{user.address.street}}<br />
           City: {{user.address.city}}
         </div><br/>
        -->
        <!--Address is now in its own directive with inherited scope from userInfoCard directive-->
        <address></address>
     
        <h4>Friends:</h4>
        <ul>
          <li ng-repeat='friend in user.friends'>
            {{friend}}
          </li>
        </ul>
        <div ng-show="!!user.rank">
          <h4>Rank: {{user.rank}}</h4>
        </div><br/>
        <button class="btn-success" ng-click="knightMe(user)" ng-show="!user.rank">Knight Me</button>
      </div>
</div>


/**********************************************************************/

//address.html child directive that inherits scope

<div ng-show="!!user.address && !collapsed" ng-click="collapseAddress()">
   <h4>Address:</h4>
   <div ng-show='!!user.address' >
     {{user.address.street}}<br />
     City: {{user.address.city}}
   </div>
 </div>
 <div ng-show="!!user.address && collapsed" ng-click="expandAddress()">
   <h4>Address:</h4>
     {{user.address.street}}...
 </div>


AngularJS Directives - Isolated Scope

Here's an example of an isolated scope directive. See the plunk here: http://plnkr.co/lS8QIhaRbjqLlA7jHm2U

// Script.js
angular.module('app',[]);

angular.module('app').controller('mainCtrl', function($scope){
  $scope.user1 = {
    name: "Luke Skywalker",
    address: {
      street: '6332 Cameron Forest Ln',
      city: "Charlotte",
      planet: "Earth"
    },
    friends: ['Han', 'Leia', 'Chewbacca']
  };

    $scope.user2 = {
    name: "Mia Farrow",
    address: {
      street: '1236 Cameron Forest Ln',
      city: "Charlotte",
      planet: "Vulcan"
    },
    friends: ['Lan', 'Tia', 'Chewbacca']
  };

  //console.log('parent scope', $scope);
  //Take this in the directive, so we don't break encapsulation and the directive handles this event
  /*
  $scope.knightMe = function(user){
    user.rank = "knight";
  }
  */
});

/***************************************************************************/

//userInfoCard directive (script.js continued)
angular.module('app').directive('userInfoCard', function(){
  return {
    restrict: "E",
    templateUrl : "userInfoCard.html",
    replace: true, // will replace directive with the contents and not display the directive tag in the html element as best practice
    //scope: true, //inherited scope; false = shared scope with parent containing controller is the default
                 //scope: {}, would be isolated scope and the directive cannot see the controller's scope now
    scope: {
        user :"="
      }, //isolated scope but now controller(parent) scope user object can be shared by passing in
         // an user object user1 from the controller. Useful as the same directive can now be used across multiple controllers
         // and they can pass in their scope user objects.
    controller: function($scope){
        $scope.knightMe = function(user){
        user.rank = "Knight";
      }
    }
  }
})

/***************************************************************************/

//index.html
<!DOCTYPE html>
<html ng-app="app">

  <head>
      <script data-require="jquery@2.1.3" data-semver="2.1.4" src="http://code.jquery.com/jquery-2.1.4.min.js"></script>
    <link data-require="bootstrap@*" data-semver="3.3.2" rel="stylesheet" href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.2/css/bootstrap.min.css" />
    <script data-require="bootstrap@*" data-semver="3.3.2" src="//maxcdn.bootstrapcdn.com/bootstrap/3.3.2/js/bootstrap.min.js"></script>
    <script data-require="angular.js@1.3.0" data-semver="1.3.0" src="https://code.angularjs.org/1.3.0/angular.js"></script>
    <link rel="stylesheet" href="style.css" />
    <script src="script.js"></script>
  </head>

  <body ng-controller="mainCtrl" class="container">
    <h5>
       <user-info-card user="user1"></user-info-card>
       <user-info-card user="user2"></user-info-card>
    </h5>
 
  </body>

</html>

The core strength of Angular - Directives

Directives are JavaScript functions that allow you to manipulate the DOM or add behavior to it. That in a single sentence aptly describes what a Directive is in AngularJS.

Directives can be Angular predefined or custom directives. They can be very simple or extremely complicated. Getting a solid understanding of Directives is necessary to be a good Angular developer.

Each directive undergoes a "life cycle" as Angular compiles and links it to the DOM.
In a directive’s life cycle, there are four distinct functions that can execute if they are defined. Each enables the developer to control and customize the directive at different points of the life cycle. The four functions are

  • compile
  • controller
  • pre-link
  • post-link

The compile function allows the directive to manipulate the DOM before it is compiled and linked, thereby allowing it to add/remove/change directives as well as other DOM elements.
The controller function facilitates directive communication. Sibling and child directives can request the controller of their siblings and parents to communicate information.
The pre-link function allows for private $scope manipulation before the post-link process begins.
The post-link method is the primary workhorse method of the directive.
Commonly the directive is defined as
  .directive("directiveName",function () {

    return {

      controller: function() {
        // controller code here...
      },
  
      link: function() {
        // post-link code here...
      }
    }
  })
The execution order of the functions within a directive and relative to other directives:
<div parentDir>
  <div childDir>
    <div grandChildDir>
    </div>
  </div>
</div>
(The referenced Toptal article includes a diagram of the AngularJS directive function execution order relative to other directives.)
Reference: http://www.toptal.com/angular-js/angular-js-demystifying-directives

Tuesday, May 19, 2015

JavaScript Tips and Nuances

JSFiddles created for my own learning and reference:

On Closures
http://jsfiddle.net/yaraj/qfuf3L6n/3/

Prototypal inheritance
http://jsfiddle.net/yaraj/c68rp5fd/1/

Sunday, May 17, 2015

Simple steps for creating tasks in Gulp task runner for your SPA application using Visual Studio

First, from within Visual Studio, go to Extensions and Updates and install the following three extensions:

  • GruntWatcher   (works with Gulp too!)
  • Package Intellisense ( helps with intellisense in npm)
  • Task Runner Explorer (helps run Gulp)


Then add a package.json file to your solution if one does not already exist. One way to do this is to create one from the command prompt in your solution's root directory:

      npm init --yes

The --yes flag accepts all the defaults. Now include this file in your solution from within Visual Studio so Node can use it.

Next, create a file in your Visual Studio solution called gulpfile.js

Create your gulp tasks in gulpfile.js like below

var gulp = require('gulp');
var concat = require('gulp-concat');
var angularFileSort = require('gulp-angular-filesort');
var strip = require('gulp-strip-line');
var templateCache = require('gulp-angular-templatecache');

gulp.task("buildMenuTemplateCache", function(){
    return gulp
           .src(["./ext-modules/menu/**/*.html"])
           .pipe(templateCache({
                 root : "./ext-modules/menu/",
                 module: "menu"
                }))
           .pipe( gulp.dest("./ext-modules/menu/"));        
});

gulp.task("buildJS", function(){
     return gulp
                 .src(["./ext-modules/**/*.js"]),  //specify source
                 .pipe(angularFileSort()),  //sorts js files in same folder so module comes first
                 .pipe(strip(['use strict'])),    //removes use strict
                 .pipe(concat('framework.js')),    //concatenate all files into single file
                 .pipe(dest('./dist/'));    //specify destination 
});

gulp.task("buildCSS", function(){
     return gulp.src(),
                 .pipe(),
                 .pipe();    
});

The above file runs in Node, not in the browser, so we need to install each of the required packages locally in the project using the --save-dev flag, like so:

     npm install --save-dev gulp

Now if we look in our package.json file, we should see all our devDependencies listed in the file.
To run each individual task in gulpfile.js from Visual Studio, just right click the file and run the task. Doing so will produce the output of the task. We can also run the tasks using Task Runner Explorer from View->Other Windows menu item.

Finally, include the output file produced by the task runner in the distribution folder using "Show All Files" in Solution Explorer. Then we only need to reference this one single JS file in index.html instead of all the individual JavaScript files! Similarly, include a single .css file in index.html.

References:
 http://www.pluralsight.com/training/player?author=mark-zamoyta&name=building-spa-framework-angularjs-m7&mode=live&clip=8&course=building-spa-framework-angularjs

http://www.toptal.com/nodejs/an-introduction-to-automation-with-gulp

Saturday, May 16, 2015

Web API v2 security

The Web API security pipeline consists of

  • Katana Middleware
  • Message handlers (the legacy counterpart being HTTP modules, which are baked into ASP.NET hosting in IIS)
  • Authentication filters
  • Authorization Filters



The whole idea behind Katana and OWIN is to be able to self-host anywhere and get away from the dependency on IIS and System.Web.

The new kid on the block for working with the client identity is
HttpRequestMessage.GetRequestContext().Principal
while using Thread.CurrentPrincipal is now considered legacy.
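
For example, a small hedged sketch inside a hypothetical ApiController showing both approaches side by side:

using System.Security.Principal;
using System.Threading;
using System.Net.Http;
using System.Web.Http;

public class ProfileController : ApiController
{
    public IHttpActionResult Get()
    {
        // Web API v2 way: the principal travels with the request context
        IPrincipal requestPrincipal = Request.GetRequestContext().Principal;

        // Legacy way: the ambient thread principal (only correct when the host flows it)
        IPrincipal threadPrincipal = Thread.CurrentPrincipal;

        var name = requestPrincipal == null ? "anonymous" : requestPrincipal.Identity.Name;
        return Ok(name);
    }
}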

Read http://chimera.labs.oreilly.com/books/1234000001708/ch10.html
http://www.hanselman.com/blog/SystemThreadingThreadCurrentPrincipalVsSystemWebHttpContextCurrentUserOrWhyFormsAuthenticationCanBeSubtle.aspx
http://leastprivilege.com/2012/06/25/important-setting-the-client-principal-in-asp-net-web-api/

Performing MongoDB atomic updates

NoSQL databases normally don't deal with transactions. Transactions can generally be avoided with good schema design, and even the canonical account debit/credit example is not really an instantaneous transaction when you transfer money to someone. You can also handle transactions from your business layer if needed. Having said that, MongoDB updates are atomic at the level of a single document, and you can issue bulk updates across many documents in the same collection as one efficient server-side operation with minimal data transfer.

MongoCollection.Update takes two parts - a Query and an Update.
Each can be type-safe; consider using the Query<T> and Update<T> builder classes.

articles.Update(query, update);

//Single update
articles.Update(Query<Article>.EQ(a => a.Id, "123A"),
                Update<Article>.Set(a => a.Expires, DateTime.Today));

//Multiple updates - note the Multi flag, otherwise only the first matching document is updated
articles.Update(Query<Article>.LT(a => a.Expires, DateTime.Today),
                Update<Article>.Set(a => a.Archived, true),
                UpdateFlags.Multi);


Note that this bulk update happens all server side and you don't have to fetch all the data on the client to update!

There is no concept of foreign keys or joins between different collections, but we can create "foreign keys" by embedding the ObjectId of a document from one collection inside a document in another collection. To do any kind of "join", you have to run two queries to fetch the related documents from the two collections.
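
A minimal sketch of such a manual "join" with the legacy C# driver; the Article and Author classes and the AuthorId reference field are made up for illustration:

using MongoDB.Bson;
using MongoDB.Driver;

public class Author
{
    public ObjectId Id { get; set; }
    public string Name { get; set; }
}

public class Article
{
    public ObjectId Id { get; set; }
    public string Title { get; set; }
    public ObjectId AuthorId { get; set; }   // manual "foreign key" into the authors collection
}

public static class ArticleQueries
{
    // Two round trips, one per collection - there is no server-side join
    public static Author GetAuthorOf(MongoDatabase database, ObjectId articleId)
    {
        var articles = database.GetCollection<Article>("articles");
        var authors = database.GetCollection<Author>("authors");

        var article = articles.FindOneById(articleId);
        return article == null ? null : authors.FindOneById(article.AuthorId);
    }
}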

The thing is, you need far fewer collections in a NoSQL model than tables in a relational model, and fewer joins translates into more scalability. The trade-off is redundant data and more memory. But the memory cost is mitigated by sharding (plus replication for availability and failover), and it often works out far cheaper than provisioning sophisticated, expensively licensed data centers to handle terabytes of relational data.

Takeaways:

  • Your Mongo context is your implicit schema
  • Use strongly typed collections to respect the schema
  • Use Mongo + LINQ
  • Use Update instead of Retrieve + Save


MongoDB Useful CRUD Extension pattern using LINQ

Say we want to remove some documents from a collection; we can use collection.Remove(), passing in a query like so:

articles.Remove(Query<Article>.Where(a => a.PublishedDate < DateTime.Today));

We can create a LINQ-friendly extension method for the MongoDB collection:

public static class MongoExtensions
{
    public static WriteConcernResult Remove<T>(this MongoCollection<T> collection,
                                               Expression<Func<T, bool>> query)
    {
        return collection.Remove(Query<T>.Where(query));
    }
}


Then you can have your zen coding moments!

articles.Remove( a => a.PublishedDate < DateTime.Today );

Wednesday, May 13, 2015

Redis Use Cases

Redis (REmote DIctionary Server) is a key-value store: it stores some data, called a value, under a key. That is the essence of a key-value NoSQL database. It is touted as the world's fastest distributed NoSQL data store. Data lives in RAM, but unlike Memcached it can also be persisted and offers failover, replication, etc.
It can handle roughly 100K to 400K simple operations per second on an Intel Core 2 Duo 2.6 GHz CPU.

Example string data structure
SET person:myname "Frybo"
GET person:myname

No Schemas and no schema migrations! ServiceStack is the most popular C# client API.
Redis differs from document-oriented NoSQL databases like Couchbase, RavenDB and MongoDB in that it is primarily resident in memory, is not persisted to disk as documents, and has no secondary indexes. It is just a key-value store where the value can be one of five data types as of now: strings, hashes, lists, sets and sorted sets. It is extremely fast even without indexes for most use cases; the raw power of Redis comes from it being resident in memory.
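
As a rough illustration of those value types through ServiceStack's IRedisClient (the key names below are made up):

using ServiceStack.Redis;

using (IRedisClient redis = new RedisClient())
{
    redis.SetValue("page:home:title", "Welcome");                 // string
    redis.AddItemToList("recent:logins", "clint");                // list (ordered, duplicates allowed)
    redis.AddItemToSet("tags:redis", "nosql");                    // set (unordered, unique members)
    redis.SetEntryInHash("user:213", "name", "Clint");            // hash (field/value pairs under one key)
    redis.AddItemToSortedSet("urn:leaderboard", "Clint", 100);    // sorted set (members ranked by score)
}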

Good use cases for Redis:
  • analytics
  • task queues
  • Caching
  • Data that expires
  • Cookie Storage
  • Search engines
  • Ad Targeting
  • Forums
  • Messaging (using Pub/Sub)
  • High I/O Workload
  • Geo searches
Bad use cases:

  • More data than can fit in RAM
  • Data that fits relational model - have relations and need joins
  • You need ACID transactions

Side Note:
For the task queue or queuing system use case, Resque is popular with Redis. Resque's real power comes from the Redis "NoSQL" key-value store. While most other key-value stores use strings as keys and values, Redis can use hashes, lists, sets, and sorted sets as values, and operate on them atomically. Resque leans on the Redis list datatype, with each queue name as a key and a list as the value. Read http://girders.org/blog/2011/10/30/how-queuing-with-resque-works/
Must see videos:
 https://www.youtube.com/watch?v=8Unaug_vmFI
https://www.youtube.com/watch?v=CoQcNgfPYPc


Sunday, May 10, 2015

Role Based Security in .NET

Security in .NET prior to 4.5 was based on IIdentity and IPrincipal, which are abstractions over how you do authentication and authorization, whether it is

  • Custom - GenericIdentity and GenericPrincipal
  • Windows - WindowsIdentity and WindowsPrincipal

Forms Authentication with <authorization /> in ASP.NET's web.config is the SQL-backed implementation of the custom flavor, via the Membership and Roles providers. But now we have a better way with claims, implemented by ClaimsIdentity and ClaimsPrincipal (next blog post).

interface IIdentity
{
   string Name { get; }
   bool IsAuthenticated { get; }
   string AuthenticationType { get; }
}

interface IPrincipal
{
   IIdentity Identity { get; }
   bool IsInRole(string role);
}

Once the current user's principal object is created, attach it to the Thread.CurrentPrincipal static property, from where you can access it anywhere in your code. Though static, it is per-thread, so there are no thread collisions and you always get the correct security context for the thread.
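
A small sketch of the custom flavor (the class and method names here are made up):

using System.Security.Principal;
using System.Threading;

public static class CustomAuth
{
    // Call this after your own credential check has succeeded
    public static void SignIn(string userName, string[] roles)
    {
        var identity = new GenericIdentity(userName);          // IsAuthenticated == true because a name was supplied
        var principal = new GenericPrincipal(identity, roles); // roles you looked up yourself

        Thread.CurrentPrincipal = principal;                   // downstream code can now do role checks
    }
}

// Later, anywhere on the same thread:
// bool canManage = Thread.CurrentPrincipal.IsInRole("Admin");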

TIP: Never check for a role using the role name as a string literal - names can change, literal strings will not work with globalization, and that will break your code. Instead use the SID (SecurityIdentifier) values - those never change. Also, unless the user is in the domain admins group, when User Account Control (UAC) is active a user in the Builtin Administrators group is not automatically granted the admin token unless the process is explicitly elevated. That is why you need to run some apps with Run As Administrator, otherwise you might get a security exception even though you are in the admin group!



Saturday, May 9, 2015

SQL Tuning test environment

Setting up a stable and predictable test environment is crucial to any performance tuning and measurements.

Clear Caches:
  • Execute Checkpoint to flush the dirty pages out into the IO sub-system
  • Clear the plan cache using DBCC FREEPROCCACHE (clears the plan cache for the whole instance); you can clear a single database's plans with the undocumented DBCC FLUSHPROCINDB(db_id), or even better just one individual plan if you know its plan handle
  • Run DBCC DROPCLEANBUFFERS - clears all data from memory -NEVER do this in PROD!
Set Measurements:
  • SET STATISTICS TIME ON - elapsed time and CPU time
  • SET STATISTICS IO ON - logical reads (data from memory) and physical reads (from IO - slower)
  • Use the graphical execution plan or SET SHOWPLAN_TEXT / SET SHOWPLAN_XML
Use DMVs:
  • System Info - sys.dm_os_wait_stats,  sys.dm_os_performance_counters
  • Query Info - sys.dm_exec_requests
  • Index Info - sys.dm_db_index_usage_stats
TIPS:

1) Look out for table scans and clustered index scans - not necessarily bad if the table is small, say only 64KB (that is eight 8KB pages), but you have to assume the table could grow very large. Remember that a clustered index scan is a scan of the table in its clustered (physically sorted) order!

2) Look out for lookups - bookmark(legacy), Key and RID lookups. We might benefit from a new index?

3) Check for spools - a spool means something is getting spooled out to tempdb, so a lot of IO is taking place.

4) Parallelism - may be good or bad for the plan; consider whether MAXDOP needs tuning.

5) And here's one BIG tip - if the row counts between the estimated and actual execution plans are wildly different, your statistics are usually (not always) very outdated/stale - the indexes have not been updated or rebuilt for a long time! The estimates could also be wrong because the optimizer has no information, for example from a scalar UDF, a table variable or a multi-statement TVF - it assumes only one row comes out. Another cause could be parameter sniffing - so sniffing is a double-edged sword.

6) Too many physical reads are usually a sign of a lack of memory, and you can throw more memory at the server to solve that problem!

7) Whenever there is sorting - say to feed a merge join - or hashing for hash joins, SQL Server may use tempdb (physical IO) heavily for those operations.

8) And then there are implicit data conversions - avoid them.

9) Note that cursors are not always bad - in fact, before SQL Server 2012 they were the most efficient way to compute running totals, say keeping a running balance, and will outperform self-joins. SQL Server 2012 introduced windowing functions like LAG and LEAD that outperform cursors. A lot of DDL uses cursors. Another good use case is maintenance scripts - say calling a stored procedure for every database in a loop, or sending an email for each entry in a list. There is no good way to do that as a set operation.

From Adam Machanic https://www.youtube.com/watch?v=GSZPvF2u6WY
The top 5 culprits are:

  • Lookup
  • Sort
  • Spools
  • Hash
  • Nested Loops (serial)

and of course Table and Index scans.

Also look at moving the fat arrows from the left toward the right of the execution plan (reading it in logical order), so that the rows get filtered as early as possible.
That's another reason why nested views are no good: the optimizer cannot push the WHERE clause predicates down to filter the data early. Replacing the views with an inline table-valued function is the solution, as the optimizer then knows how to build a correct estimated plan for the query.

Sunday, May 3, 2015

WebSockets, SignalR and Real Time web applications

WebSockets changed the way the client and server can communicate in our web apps. Previously we could keep an HTTP 1.1 persistent connection between client and server, but it was only used so the client could reuse the connection to make further requests to pull any updates.

WebSockets run over TCP, which allows the server to communicate with the clients without the clients initiating a request! In short, it allows bi-directional, full-duplex communication between the client and the web server.

That means messages can be transmitted between client and server concurrently. And the messages don't carry rigid HTTP baggage like headers and cookies, which means they can be very lightweight and very fast!


The WebSockets API looks very simple

var socket = new WebSocket("ws://echo.websockets.org");

//Once the connection opens, send a message to the server
socket.onopen = function(){
            socket.send("hello from websockets!");
};

//To receive messages from the server, subscribe to the onmessage event
socket.onmessage = function(event){
            alert("I got server data! " + event.data);
};

But to work at this low level, you still have to worry about serializing data to send to the server and deserializing data coming back, managing connections, browser support, ordering of messages, and so on. The answer - SignalR!

SignalR is a wrapper that encapsulates WebSockets (and falls back to other transports such as long polling when WebSockets is not available), so we no longer need to hand-roll long-polling JavaScript to get instant updates from the server. This opens up a slew of opportunities for developing all kinds of real-time web applications.
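
On the server side, here is a hedged sketch of what a SignalR 2 hub can look like; the hub name, method and client callback are made up:

using Microsoft.AspNet.SignalR;

// Clients invoke hub methods over the connection; the hub can push to all connected clients at any time
public class TickerHub : Hub
{
    public void BroadcastPrice(string symbol, decimal price)
    {
        // 'updatePrice' is whatever callback the JavaScript clients registered on the hub proxy
        Clients.All.updatePrice(symbol, price);
    }
}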

Learning demo on GitHub:
https://github.com/yashish/RealTime-PerfMon-using-SignalR-Knockout.git

Kendo Grid and Chart

Getting my feet wet with the Kendo widgets:

http://jsfiddle.net/yaraj/hfwzzss1/13/

Saturday, May 2, 2015

AngularJS ui-router nested states Hint

The cool thing about nested states in ui-router is that we can define a resolve property to materialize data (say, from a service) in the parent state, and this data is then available to all the child states! Very cool, as it allows data to be shared across different states - that is, different views - without fetching it again and again, while retaining context. Note that a single controller can serve all the states within a nested state hierarchy.

Example code below. The coupon data from the parent state is available across all nested views!

.state("couponEditParent", {
                abstract: true,
                 url: "/coupons/edit/:couponId",
                 templateUrl: "app/coupons/couponEditView.html",
                 controller: "CouponEditCtrl as vm",
                 resolve: {
                       couponService: "CouponService",
                       coupon: function(couponService, $stateParams) {
                                           var couponId = $stateParams.productId;
                                           return couponService.get({couponId: couponId})
                                    }
                       }
  })
.state("couponEditParent.basicInfoChild", {
                 url: "/basicInfoChild",
                 templateUrl: "app/coupons/couponEditBasicInfoView.html"
  })
.state("couponEditParent.detailsInfoChild", {
                 url: "/detailsInfoChild",
                 templateUrl: "app/coupons/couponEditDetailsInfoView.html"
  })

Note that if your app never needs to activate the parent state on its own (without a child state), ui-router provides an "abstract" state property that can be applied to the parent state so it can never be explicitly activated - an activation attempt will throw an exception! It is only activated implicitly when one of its child states is activated.