ASP.NET Web API supports OData queries out of the box for shaping the data returned by your web service. There are just three steps to implement it, after first installing the Web API OData NuGet package (Microsoft.AspNet.WebApi.OData).
1) Decorate the targeted Web API endpoint with the [EnableQuery] attribute
2) Return IQueryable<T> (not IEnumerable<T> or any other variant) from the REST endpoint
3) Add AsQueryable() at the end of your LINQ query so it returns the items as IQueryable<T>
Example:
[EnableQuery]
public IQueryable<Order> Get()
{
    return orderRepository.Retrieve().AsQueryable();
}
And that's it. You can skip, sort, take, filter and do everything you need to shape the data returned to your UI in the manner you like. Of course, your UI code needs to use the OData query syntax when hitting the endpoints. All in all, a very simple API to learn and implement, compared with rolling your own custom filtering, sorting et al. code.
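For example, assuming the controller above is routed at /api/orders and Order exposes TotalPrice and OrderDate properties (both illustrative), the client can shape results with the standard OData query options:
GET /api/orders?$top=10&$skip=20
GET /api/orders?$filter=TotalPrice gt 40
GET /api/orders?$orderby=OrderDate desc
Pairing $skip and $orderby gives you server-side paging and sorting with no extra endpoint code.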
Friday, June 5, 2015
Thursday, June 4, 2015
ASP.NET Web API Authentication from Angular
When a user logs in by issuing a POST request from the browser to the default authorization service provided by ASP.NET Web API, the POST message is not in JSON format but URL-encoded, like so:
userName=xyz%40domain.com&password=foo_Bar&grant_type=password
So if we use the $resource RESTful service to hit the authorization service's /token endpoint, we need to convert the JSON message that $resource emits by default to match this URL-encoded format.
To do this, we need to do two things:
1) Specify in the POST request header that the content type is URL-encoded, like so:
Content-Type: application/x-www-form-urlencoded
2) Transform the JSON to URL-encoded format using the transformRequest property of the $resource action. See the code snippet:
(function () {
    angular.module('common.services')
        .factory('userService', ['$resource', 'appConfigs', userService]);

    function userService($resource, appConfigs) {
        return {
            register: $resource(appConfigs.serverURL + '/api/Account/Register', null, {
                'registerUser': { method: 'POST' }
            }),
            login: $resource(appConfigs.serverURL + '/token', null, {
                'loginUser': {
                    method: 'POST',
                    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
                    // serialize the JSON payload as key=value pairs joined by &
                    transformRequest: function (data, headersGetter) {
                        var str = [];
                        for (var s in data)
                            str.push(encodeURIComponent(s) + '=' + encodeURIComponent(data[s]));
                        return str.join('&');
                    }
                }
            })
        };
    }
})();
Once we successfully call this endpoint from the login button, we grab and store the access token returned by the authorization service, allowing us to pass the token along with future requests.
vm.user = {};
....
vm.login = function () {
    vm.user.grant_type = 'password';
    vm.user.userName = vm.user.email;
    userService.login.loginUser(vm.user,
        function (data) {
            vm.isLoggedIn = true;
            vm.message = '';
            vm.password = '';
            vm.token = data.access_token; //SAVING TOKEN HERE!!
        },
        //add response.data.error callback message to the vm.user.message property
        ....
    );
};
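With the token saved, subsequent calls to protected endpoints must carry it in an Authorization: Bearer header. A minimal, illustrative way to set it globally (assuming $http is injected where the login succeeds; a dedicated $http interceptor is the cleaner long-term option):
$http.defaults.headers.common.Authorization = 'Bearer ' + vm.token;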
Enabling CORS in your Web API
The same-origin policy enforced in browsers as a defense against cross-site attacks has traditionally prevented a browser from calling an API deployed on a different domain or server from the one the web application was served from.
But there are scenarios where you want your API to be accessible across domains - say internally across applications or if you are serving your Web API as a public API.
Before the advent of CORS (Cross-Origin Resource Sharing), developers worked around this with JSONP (the P stands for "padding": the JSON response is wrapped in a callback function supplied by the page). JSONP calls the Web API over an HTTP endpoint using the src attribute of a <script> tag, since <script> tags are allowed to load cross-domain.
But JSONP is more of a hack, and modern browsers now support CORS. Enabling CORS is very easy. Install the Microsoft.AspNet.WebApi.Cors NuGet package and, in your configuration setup in WebApiConfig.cs, make an entry:
config.EnableCors();
Then, as an example, just mark your controller class or GET action methods in your Web API with
[EnableCors("http://localhost:52347", "*", "GET")]
Each of the three parameters can be a comma-separated list of values or * (for all). The first parameter is the list of valid origins (the origins of the client apps that will access the API), the second the list of allowed headers, and the third the list of allowed HTTP methods.
Trivia:
1) In your local environment, IE will not treat a different port on the same host (where your web service is deployed) as a different domain. Once the pieces are deployed to separate servers, though, it enforces the same-origin policy like everyone else.
2) Use the JsonFormatter's ContractResolver in WebApiConfig.cs to keep your C# property names PascalCase while emitting camelCase JSON properties, so you don't compromise naming conventions on either end.
config.Formatters.JsonFormatter.SerializerSettings.ContractResolver =
new CamelCasePropertyNamesContractResolver();
Redis hash - To be or not to be
The primary data structure Redis offers for optimizing memory utilization is the hash: a map of fields to values stored under a single key, so each property of your object becomes a field in the hash. Using HMSET, you can set multiple fields of a hash in a single command. (In implementation speak, a hash table applies a hash function to each field name to locate it; a good algorithm avoids hash collisions.)
Now, why go through the extra effort of storing individual fields of the object in a hash, rather than storing the object as a single key-value pair, which is simpler to serialize/deserialize between JSON and your POCO objects? The answer is performance and, yes, memory efficiency too.
With a hash, the fields are indexed, so you can get to the value of a particular field of your object very quickly - not so if you store the whole object as one KV pair.
So the bottom line is: if you need to read/write specific properties of your object frequently, consider storing it as a hash. If the object is retrieved whole most of the time, just store it as a simple string key-value pair.
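To make the trade-off concrete, here is a minimal redis-cli sketch (key and field names are illustrative):
HMSET order:1 totalPrice 45.89 category online
HGET order:1 totalPrice
SET order:1:json "{\"totalPrice\":45.89,\"category\":\"online\"}"
GET order:1:json
The HGET reads one field directly; the string variant forces you to GET and deserialize the entire JSON blob even when you only want totalPrice.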
Note that, as of today, the default C# driver does not use hashes for storing objects by default.
For more reading on this topic, go to
http://redis.io/topics/memory-optimization
For an excellent blog on the five different data structures in Redis, go to
http://openmymind.net/2011/11/8/Redis-Zero-To-Master-In-30-Minutes-Part-1/
Wednesday, June 3, 2015
Using $httpBackend to mock up E2E Web Service API
Define a module in your cross-cutting common services folder to set up the mock interceptor for the backend web services. This module is injected as a dependency of your main module and commented out when the actual RESTful API becomes available. Note that Angular's ngResource module makes it easy to work with a RESTful API, and the ngMockE2E module supplies the mock $httpBackend used here.
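A minimal wiring sketch (module names are illustrative):
// app.mocks.js - loaded only while developing against the mock backend
angular.module('app.mocks', ['ngMockE2E'])
    .run(['$httpBackend', function ($httpBackend) {
        // the whenGET/whenPOST registrations shown below go here
    }]);

// the main module pulls the mock module in as a dependency during development
angular.module('app', ['common.services', 'app.mocks']);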
In the mocking module, do something like this:
var orders = [
{
orderId : 1,
orderDate : Date.now(),
totalPrice: 45.89,
category : "online"
},
{
orderId : 2,
orderDate : Date.now(),
totalPrice: 35.55,
category : "in-store"
},
//more order objects
];
var orderUrl = "/api/orders";
//get all Orders
$httpBackend.whenGET(orderUrl).respond(orders);
//get a single Order - use a regex to match the id at the end of the URL
var editingRegex = new RegExp(orderUrl + "/[0-9]+");
$httpBackend.whenGET(editingRegex).respond(function (method, url, data) {
    var order = { orderId: 0 };
    var parameters = url.split('/');
    // the id parsed from the URL is a string, so convert it before the === comparison
    var id = parseInt(parameters[parameters.length - 1], 10);
    if (id > 0) {
        for (var i = 0; i < orders.length; i++) {
            if (orders[i].orderId === id) {
                order = orders[i];
                break;
            }
        }
    }
    return [200, order, {}];
});
//Save or edit an order
$httpBackend.whenPOST(orderUrl).respond(function (method, url, data) {
    var order = angular.fromJson(data);
    if (!order.orderId) { //new order
        order.orderId = orders[orders.length - 1].orderId + 1;
        orders.push(order);
    }
    else { //update order
        for (var i = 0; i < orders.length; i++) {
            if (orders[i].orderId === order.orderId) {
                orders[i] = order;
                break;
            }
        }
    }
    return [200, order, {}];
});
//Ignore any other assets the app needs that we should not be intercepting
$httpBackend.whenGET("/app/").passThrough();
Voila! Using this mockup, the application will add new Orders to the in-memory array of Orders without the need to touch a backend API.
For more details, read up
https://docs.angularjs.org/api/ngMockE2E/service/$httpBackend
Tuesday, June 2, 2015
SQL CE Index is not populated during a batch insert
Discovered a strange behavior with SQL CE (version 3.0) in a handheld mobile application the other day. One of the tables had a million-plus rows with an index on the correct column for the query, yet it was taking a few minutes to return results.
The table was used as a template table, starting with 0 rows, with the million-plus rows inserted by the application logic. Though the index existed, it was discovered to be empty when running the command:
sp_show_statistics 'table_name', 'index_name'
TABLE INDEX ROWS ROWS_SAMPLED STEPS DENSITY
Item UQ_Item_Index 0 0 0 0
The solution was to drop the index and re-create it after the rows were inserted by the application logic.
Once this change was made, results returned in less than a second. There are a lot of peculiarities with the SQL CE database because of its limitations, and I guess this is one of them.
Apparently an index that already exists on an empty CE table is never populated by the batch insert; it has to be created after some data is in the table.
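In DDL terms the pattern looks like this (the ItemCode column is illustrative; use whatever column the index actually covers):
DROP INDEX Item.UQ_Item_Index;
-- ... application logic performs the million-plus row batch insert ...
CREATE UNIQUE INDEX UQ_Item_Index ON Item (ItemCode);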
After the change, sp_show_statistics showed the index fully populated:
TABLE INDEX ROWS ROWS_SAMPLED STEPS DENSITY
Item UQ_Item_Index 1129454 1129454 200 8.853835E-07
Unit Test scaffolding with Jasmine
To run JavaScript unit tests using the Jasmine testing framework, install Jasmine as a Node package; it can then be incorporated into your build process and run as a task on each build. Alternatively, download Jasmine directly as a standalone rather than via npm. The standalone distribution comes with an HTML page, SpecRunner.html, which loads the testing framework and the JavaScript files under test and executes the tests.
By convention, create a "spec" folder for your test files and name them like test1.spec.js, where test1 is any appropriate name for your test.
Use the snippet of a unit test specification below to create your specs. Note the use of nested describe functions to group a set of unit tests.
describe("Expect", function(){ // Jasmine executes this function when the tests run
describe("false", function(){
it("to be equal to false", function(){
expect(false).toEqual(false); // expect function is the assert method
});
});
// Add other describe functions here like the spyOn test below
});
Jasmine interprets this spec as "Expect false to be equal to false" and gives the test that name; the outer string "Expect" is prepended to every test within the group. The it function describes an actual test within the group and takes two parameters, just like the describe function.
We can also use the spyOn function to test (spy on) whether a function has been called, how many times it has been called, and with what parameters.
describe("my coolFunction", function(){
it("is called", function(){
// create object wrapper around the function expected to be called
var thisIs = {
coolFunction : function (){}
}
spyOn(thisIs, "coolFunction");
thisIs.coolFunction(); //call the function
expect(thisIs.coolFunction).toHaveBeenCalled();
});
});
});
Note that spyOn already replaces the real method with a mock, so the actual implementation is never invoked - which is exactly what we want, since all we are interested in is whether the function has been called. (In Jasmine 2.x, chain .and.callThrough() onto the spy if you do want the real method to run.)
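A spy can also hand back canned values so the code under test gets a predictable result - a small sketch in Jasmine 2.x syntax:
describe("my coolFunction", function () {
    it("returns a stubbed value", function () {
        var thisIs = { coolFunction: function () {} };
        // the spy intercepts the call and returns 42 without running the original
        spyOn(thisIs, "coolFunction").and.returnValue(42);
        expect(thisIs.coolFunction()).toEqual(42);
        expect(thisIs.coolFunction).toHaveBeenCalled();
    });
});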
Source Map in Browserify
While using Browserify to bundle all your JavaScript files into a single app.bundle.js file, come debugging time it is very difficult to locate, inside that single giant file, the exact source file where an exception occurred.
That's where a source map created (say, during development) comes in handy to quickly locate which source file is throwing the exception. The source map is just a mapping (a dictionary) back to the actual source files that helps us identify them without actually loading them in the browser.
To create a source map, you can issue the following command when you create the bundle from the command line, using --debug (shorthand is -d)
C:\Projects\MyApp> browserify src/js/app.js -o dist/js/app.bundle.js --debug
Browserify embeds this source map at the end of app.bundle.js, encoded as a data URL the browser can recognize.
Since this increases the size of the file, it is recommended to turn it off before deploying to production.
On the same note, to avoid manually rerunning browserify every time a change is made, install watchify, which is available as a Node package. It is just like brunch watch if you use Brunch to build your JS files.
npm install -g watchify
After installing it, run this command just once, with the -v option to watch verbosely:
C:\Projects\MyApp> watchify src/js/app.js -o dist/js/app.bundle.js --debug -v
Now, as your project and build process get more complicated, you might want not only to browserify and watchify your JavaScript files, but to use a build task runner like Gulp or Grunt, so you can lint your files, browserify them, minify them, compile any Less or Sass files for styling, and so on. With a task runner, you create a task that runs browserify. You can still use watchify alongside Grunt to rebuild automatically, but Grunt also has its own watch plugin (grunt-contrib-watch), a Node module loaded with grunt.loadNpmTasks().
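For instance, a minimal Gulp task along these lines (paths are illustrative, and it assumes the browserify and vinyl-source-stream npm packages are installed):
var gulp = require('gulp');
var browserify = require('browserify');
var source = require('vinyl-source-stream'); // adapts browserify's stream for gulp

gulp.task('bundle', function () {
    return browserify('./src/js/app.js', { debug: true }) // debug: true emits the source map
        .bundle()
        .pipe(source('app.bundle.js'))
        .pipe(gulp.dest('./dist/js'));
});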
Chrome won't give an option to clear the cache
While debugging an app in Chrome, I discovered that Chrome won't give any option to clear the cache. Even IE gives that option :-) but Google thinks managing the cache is the developer's problem and won't support it. See the thread on Google Code below; it seems to a lot of people that providing such an option might actually make Chrome look slower, so it isn't getting fixed.
https://code.google.com/p/chromium/issues/detail?id=87604