
CICD – Castrol for DevOps

We started 2018 with multiple deployments during regular business hours for one of our clients – the same client that used to send out multiple emails to their users about system outages for a big-bang deployment during off hours, with their entire team in the office for smoke testing. A scenario that today seems prehistoric, even though it is only a few years old!

 

Welcome to the world of DevOps. Continuing from our previous post on this topic, make sure to put some Castrol in your engine – or, in tech speak, CICD in your deployment pipeline.

Continuous Integration refers to a software development practice where programmers integrate and check in their code frequently.

Continuous Delivery refers to a software development discipline where you build software in such a way that it can be released to production at any time.

Continuous Deployment means that every change you make gets automatically deployed through a deployment pipeline. This term is often confused with “Continuous Delivery”.

 

Traditionally, teams are most focused during go-live! In fact, apart from a few, most folks who will be affected by the solution are not involved (or do not care much) until it is live. One solution to this problem is to go live often – a challenge under the traditional big-bang go-live model. It can be implemented by:

  1. Reducing the feature list included in each release
  2. Automating the release process

You can get an idea of the first point outlined in this blog post.

The second point is where CICD is used.

There are many tools available that can get this done, but in the end you need to ensure your choice is acceptable to your change control team (audit, compliance, the ‘server guys’, etc.).

As we help our clients move to the cloud – leveraging container-based deployments and transferring compliance risks to cloud service providers like Microsoft and Amazon – we work with solutions that have ample support and integrate with the development tools. During our Microsoft Gold Certification we validated our CICD process using Azure. It integrates very well with Visual Studio, and deployments are permission-controlled for each environment.

System Requirements for deploying an agent on Windows

To build your code or deploy your software, you need at least one agent. When your build or deployment runs, the system starts one or more jobs. An agent is installable software that runs one build or deployment job at a time.

There are two types of agents: Hosted Agents and Private Agents.

Hosted Agents: If you’re using Team Services, you have the option to build and deploy using a hosted agent. When you use a hosted agent, maintenance and upgrades are taken care of by Team Services, so for many teams this is the simplest way to build and deploy. Hosted agents live in the hosted pool. If you need to run more than one job at a time, you’ll need additional concurrent pipelines.

Private Agents: An agent that you set up and manage on your own to run build and deployment jobs is a private agent. You can use private agents in Team Services or Team Foundation Server (TFS). Private agents give you more control to install dependent software needed for your builds and deployments. After you’ve installed the agent on a machine, you can install any other software on that machine as required by your build or deployment jobs.

Before creating and deploying your agent, make sure that your machine is prepared with the Windows system prerequisites:

  • Windows 10 and later (64-bit): no known system prerequisites at this time.
  • Windows 7 to Windows 8.1, Windows Server 2008 R2 SP1 to Windows Server 2012 R2 (64-bit): PowerShell 3.0 or higher.
  • Visual Studio: Even though not technically required by the agent, many build scenarios require Visual Studio to get all the tools. It is recommended that you install Visual Studio 2015 or higher on your machine.

Prepare Permissions to Register Agent

Decide which user account you’re going to use to register the agent.

  1. Point your web browser to the Visual Studio Team Services URL: https://<your company>.visualstudio.com/ and key in your login credentials.
  2. From your home page, mouse over your profile and go to your security details.
  3. Create a personal access token (PAT) by clicking “Add” under Personal Access Tokens.
  4. Key in a description, set the “Expires In” option to 1 year, and select <company> under “Accounts”.
  5. For the scope, select “Agent Pools (read, manage)” and make sure all the other boxes are cleared.
  6. Click “Create Token”.
  7. Copy the generated token and keep it somewhere safe. You will use this token when you configure the agent.
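Once generated, the PAT is presented as an HTTP Basic auth header (empty username, the token as the password) by the agent and by any REST client talking to Team Services. A minimal Python sketch of that encoding – the token value here is obviously made up:

```python
import base64

def pat_auth_header(pat: str) -> dict:
    """Build the HTTP Basic auth header Team Services expects:
    an empty username and the PAT as the password, base64-encoded."""
    token = base64.b64encode(f":{pat}".encode("ascii")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# Example with a made-up token value:
headers = pat_auth_header("abc123")
print(headers["Authorization"])
```

The same header works for every REST call in the rest of this walkthrough.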

 

Download and configure your agent

  1. Point your web browser to the Team Services URL: https://<your company>.visualstudio.com/_admin/_AgentPool and click “New Pool”.

 

  2. Key in the pool name and click “OK”.
  3. The pool thus created appears under All Pools. In this case we’ve created a new pool – Exigent-FJC.

 

  4. Select the pool name, i.e. Exigent-FJC, and click Download Agent.
  5. In the Get agent dialog box, Windows remains selected by default.

 

  6. Click the Download button.
  7. Follow the instructions on the page.

 

Creating Build Definition

A build definition is the entity through which you define your automated build process.

  1. In Team Explorer, select the project for which you want to create a new build definition. In this case we’ve selected “MSTN”.
  2. Select the “Build & Release” hub in your Team Services project. The Builds tab remains selected by default. Create a new definition by clicking “New“.

 

  3. Start with an empty process.
  4. Specify whatever name you want to use. For the Default agent queue, select Hosted.
  5. Click Save & queue, and then click Save.

 

Creating Tasks

A task is the building block for defining automation in a build definition. In the build definition, you compose a set of tasks, each of which performs a step in your build. The task catalogue provides a rich set of tasks for you to get started.

Different sets of tasks have to be created for Azure and non-Azure servers. Let us first understand the tasks to be created for Azure servers.

On the Tasks tab, click Add Task, and add the following tasks with their respective sets of parameters:

  1. NuGet restore: Click the —- category, click the NuGet restore task, and then click Add.

 

  • Enter a display name.

 

  • Path to solution: Path to the solution file(s) for which packages should be restored. The path must be a fully qualified path or a valid path relative to the root directory of your repo. Enter the following path: **/*.sln
  • Feeds to use: select the option “Feed(s) I select here”.
  • Checkmark the option “Use packages from NuGet.org”.
  • Click on “Save & Queue” to save the task.
  • Visual Studio Build: Go to the Build category, click the Visual Studio Build task, and then click Add.

 

  • Enter the display name you want.

 

  • Solution: If you want to build a single solution, click the … button and select the solution. If you want to build multiple solutions, specify search criteria. You can use a single-folder wildcard (*) and recursive wildcards (**). For example, **/*.sln searches for all .sln files in all subdirectories. Make sure the solutions you specify are downloaded by this build definition.
  • Visual Studio Version: select Visual Studio 2015.
  • MSBuild Arguments: pass any additional MSBuild arguments required for your build.
  • Platform: Specify the platform you want to build.
  • Configuration: Specify the configuration you want to build, such as debug or release.
  • Once done, click on “Save & Queue” to save the task.
  • Index Sources & Publish Symbols: Go to the Build category, click the Index Sources & Publish Symbols task, and then click Add.

 

 

  • Path to publish symbols: If you want to publish your symbols, specify the path to the SymStore file share. This is an optional field. If you leave this argument blank, your symbols will be source-indexed but not published.
  • Search pattern: Specify search criteria to find the .pdb files in the folder that you specify in the Path to symbols folder argument. You can use a single-folder wildcard (*) and recursive wildcards (**). For example, **/bin/**/*.pdb searches for all .pdb files in all subdirectories named bin.
  • Path to symbols folder: The path to the folder you want to search for symbol files. If you leave it blank, the path used is Build.SourcesDirectory.
  • Once done, click on “Save & Queue” to save the task.
  • Publish Build Artifacts: Go to the Utility category, click the Publish Build Artifacts task, and then click Add.

 

 

  • Enter the display name.
  • Path to publish: Path to the folder or file you want to publish. The path must be a fully qualified path or a valid path relative to the root directory of your repo. Typically you’ll specify $(Build.ArtifactStagingDirectory).
  • Artifact name: Specify the name of the artifact. For example: drop.
  • Artifact Type: Choose server to store the artifact on your Team Foundation Server.
  • Click on “Save & Queue” to save the task.
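As a side note, the recursive wildcard patterns these tasks use (e.g. **/*.sln, **/bin/**/*.pdb) behave much like recursive globbing in most languages. Here is a small, stdlib-only Python illustration – the folder layout is made up for the example:

```python
import glob, os, tempfile

# Build a tiny repo layout to match against.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "src", "App", "bin"))
os.makedirs(os.path.join(root, "tools"))
for rel in ("src/App/App.sln", "src/App/bin/App.pdb", "tools/helper.pdb"):
    open(os.path.join(root, *rel.split("/")), "w").close()

# **/*.sln -> every .sln file at any depth.
slns = glob.glob(os.path.join(root, "**", "*.sln"), recursive=True)

# **/bin/**/*.pdb -> only .pdb files that sit under a folder named bin.
pdbs = glob.glob(os.path.join(root, "**", "bin", "**", "*.pdb"), recursive=True)
```

Note that tools/helper.pdb is excluded by the second pattern – exactly the behavior you want when harvesting symbols from build output folders only.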

 

Creating Releases in Release Management

A release is the package or container that holds a versioned set of artifacts specified in a release definition. It includes a snapshot of all the information required to carry out all the tasks and actions in the release definition – like the environments, the task steps for each one, the values of task parameters and variables, and the release policies such as triggers, approvers, and release queuing options.

Releases can be created from a release definition in several ways:

  • By a continuous deployment trigger that creates a release when a new version of the source build artifacts is available.
  • By using the Release command in the UI to create a release manually from the Releases tab or the Builds tab.
  • By sending a command over the network to the REST interface.

Whichever approach you use to create a release definition, understand that the act of creating a release does not automatically or immediately start a deployment.
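To make the REST option concrete, the sketch below assembles the request a client would send to create a release. The endpoint path and api-version here are assumptions based on the Team Services REST API of this era – check the REST reference for your TFS/Team Services version – and no network call is made:

```python
import base64, json

def build_create_release_request(account: str, project: str,
                                 definition_id: int, pat: str):
    """Assemble URL, headers and body for a 'create release' REST call.
    The endpoint path and api-version are assumptions; consult your
    Team Services / TFS version's REST reference before using them."""
    url = (f"https://{account}.vsrm.visualstudio.com/{project}"
           f"/_apis/release/releases?api-version=4.1-preview")
    token = base64.b64encode(f":{pat}".encode()).decode()
    headers = {"Authorization": f"Basic {token}",
               "Content-Type": "application/json"}
    body = json.dumps({"definitionId": definition_id,
                       "description": "Release created via REST"})
    return url, headers, body

# Hypothetical account/project/PAT values for illustration only.
url, headers, body = build_create_release_request("fabrikam", "MSTN", 1, "abc123")
```

A real client would then POST `body` to `url` with `headers`; the response describes the release that was created (but, as noted above, not yet deployed).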

Here, we’ll discuss creating a release definition with a continuous deployment trigger.

  1. To create a release definition, choose Create Release Definition from the Release drop-down list.

 

  2. Start with an empty definition.

 

  3. Click Next. The project and build definition for which the release is being created will be selected by default. Click Create.

 

  4. Give your release definition a name by clicking the edit icon, then click Save.

 

  5. Once you save a release definition, it shows up in the list of all release definitions.

 

Adding Environment

An environment is a logical and independent entity that represents where you want to deploy a release generated from a release definition. Dev, QA and Prod are some good examples of release environment. As new builds are produced, they can be deployed to Dev. They can then be promoted to QA, and finally to Prod. At any time, each of these environments may have a different release (set of build artifacts) deployed to them.

A release definition, by default, contains a single environment. You can, however, configure additional environments to represent the target server(s) or locations where you will deploy your app.

  1. To create a new environment, edit the release definition. In this case, let’s edit MSTN Release.

 

  2. Click on Add environment and choose Create new environment.

 

  3. In the Add new environment dialog, select a template for the new environment to automatically add appropriate tasks, or create an empty environment with no default tasks.

 

  4. Select the pre-deployment approval and trigger settings for the new environment. You can quickly select users or groups as pre-deployment approvers by typing part of the name.

 

  5. Choose Create and then edit the new environment name as required.

You can define approvers for each environment. When a release is created from a release definition that contains approvers, the deployment stops at each point where approval is required until the specified approver grants approval or rejects the release (or re-assigns the approval to another user).

 

 

You can define pre-deployment approvers, post-deployment approvers, or both for an environment.

The Automatic option allows you to approve a deployment automatically. When you select the Specific Users option, you can select one or more approvers for an approval step. You can add multiple approvers for both pre-deployment and post-deployment settings.

When you add multiple approvers, you can control how they can approve the deployment. The options are:

  • All users in any order: Choose this option if you want sign-off from a set of users, all of them must approve, and it does not matter in what order they approve.
  • All users in sequential order: Choose this option if you want sign-off from a set of users, all of them must approve, and you want them to approve in the specified order. For example, the second user can approve only after the first user approves, and so on.
  • Any one user: Choose this option if you want sign-off from only one of a set of users. When any one of the users approves, the release moves forward.

You can also specify account (user) groups as approvers. When a group is specified as an approver, only one of the users in that group needs to approve in order for the release to move forward.
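The approval semantics above can be modeled as a small decision function. This is only an illustrative model of the three multi-approver options – not actual Release Management behavior or code:

```python
def release_can_proceed(mode, required, approvals):
    """Decide whether a deployment step moves forward.

    mode      -- 'any_one', 'any_order', or 'sequential'
    required  -- approvers configured for the step, in configured order
    approvals -- approvers who have approved so far, in order received
    """
    if mode == "any_one":
        # Sign-off from any single configured approver is enough.
        return any(a in required for a in approvals)
    if mode == "any_order":
        # Everyone must approve, order irrelevant.
        return set(required) <= set(approvals)
    if mode == "sequential":
        # Everyone must approve, in exactly the configured order.
        filtered = [a for a in approvals if a in required]
        return filtered == required
    raise ValueError(f"unknown mode: {mode}")
```

For example, with approvers ["ann", "bob"] configured sequentially, Bob approving before Ann does not move the release forward, whereas in 'any order' mode it would.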

After you have created and configured your environments, add tasks to them.

 

Adding Tasks

Select an environment in the definition and choose Add tasks.

 

By default, the task selector shows tasks generally used in a release definition. More tasks are available in the other tabs of the Task catalogue dialog. Add the following release tasks with their respective sets of parameters:

 

  • Azure App Service Deploy: Go to the Deploy category, click the Azure App Service Deploy task, and then click Add.

  

  • Click on the edit icon to enter your preferred display name.

 

  • Azure Subscription: Select the AzureRM subscription. If none exists, click the Manage link to navigate to the Services tab in the administration panel, then click New Service Endpoint and select Azure Resource Manager from the dropdown.
  • App Service Name: Select the name of an existing AzureRM Web Application. Enter the name of the Web App if it was provisioned dynamically using the Azure PowerShell task and AzureRM PowerShell scripts.
  • Deploy to Slot: Select the option to deploy to an existing slot other than the Production slot. Do not select this option if the Web project is being deployed to the Production slot. The Web App itself is the Production slot.
  • Virtual Application: Specify the name of the Virtual Application that has been configured in the Azure portal. The option is not required for deployments to the website root. The Virtual Application should have been configured prior to deploying the Web project to it using the task.
  • Package or Folder: Location of the Web App zip package or folder on the automation agent, or on a UNC path accessible to the automation agent, e.g. \\BudgetIT\Web\Deploy\Fabrikam.zip. Predefined system variables and wildcards such as $(System.DefaultWorkingDirectory)\**\*.zip can also be used here.

 

  • Visual Studio Test: Go to the Test category, click the Visual Studio Test task, and then click Add.

 

 

  • Test Assembly: Use this to specify one or more test file names from which the tests should be picked.
  • Paths are relative to the ‘Search Folder’ input.
  • Multiple paths can be specified, one on each line.
  • Uses minimatch patterns.

 

For example: to run tests from any test assembly that has ‘test’ in the assembly name, use **\*test*.dll. To exclude tests in any folder called obj, use !**\obj\**.

 

  • Test Filter Criteria: Filters tests from within the test assembly files. For example, “Priority=1 | Name=MyTestMethod”. This option works the same way as the console option /TestCaseFilter of vstest.console.exe.
  • Run Settings File: Path to a runsettings or testsettings file can be specified here. The path can be to a file in the repository or a path to a file on disk. Use $(Build.SourcesDirectory) to access the root project folder.
  • Override TestRun Parameters: Override parameters defined in the TestRunParameters section of the runsettings file. For example: Platform=$(platform);Port=8080. Click here for more information on overriding parameters.
  • Code Coverage Enabled: If set, this will collect code coverage information during the run and upload the results to the server. This is supported for .Net and C++ projects only. Click here to learn more about how to customize code coverage and manage inclusions and exclusions.
  • Run in Parallel: If set, tests will run in parallel leveraging available cores of the machine. Click here to learn more about how tests are run in parallel. 
  • Cloud-based Web Performance Test: Go to the Test category, click the Cloud-based Web Performance Test task, and then click Add

 

 

  • Enter the display name.
  • VS Team Services Connection: the name of a Generic Service Endpoint that refers to the Team Services account you will run the load test from and publish the results to. Since it’s an optional field, you may leave it blank.
  • Website URL: Enter the URL of the app to test.
  • Test Name: Enter a name for this load test. This particular name is used to identify it for reporting and for comparison with other test runs.
  • User Load: The number of concurrent users to simulate in this test. Select a value from the drop-down list.
  • Run Duration (sec): The duration of this test in seconds. Select a value from the drop-down list.
  • Load Location: The location from which the load will be generated. Select a global Azure location, or Default to generate the load from the location associated with your Team Services account.
  • Run load test using: Select “Automatically provisioned agents” if you want the cloud-based load testing service to automatically provision agents for running the load tests. The application URL must be accessible from the Internet.

Alternatively, select “Self-provisioned agents” if you want to test applications behind the firewall. You must provision agents and register them against your Team Services account when using this option.

  • Fail test if Avg. Response Time (ms) exceeds: Specify a threshold for the average response time in milliseconds. If the observed response time during the load test exceeds this threshold, the task will fail.

Once done, click on “Save” to save the tasks and the release definition.
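To make the Test Filter Criteria semantics from earlier concrete ("Priority=1 | Name=MyTestMethod" means clauses OR-ed together), here is a greatly simplified Python model; the real /TestCaseFilter syntax also supports &, !=, ~ and parentheses:

```python
def matches_filter(test_properties: dict, criteria: str) -> bool:
    """Greatly simplified model of a vstest /TestCaseFilter expression:
    clauses separated by '|' are OR-ed; each clause is Property=Value.
    (The real syntax also supports &, !=, ~ and parentheses.)"""
    for clause in criteria.split("|"):
        prop, _, value = clause.strip().partition("=")
        if str(test_properties.get(prop.strip())) == value.strip():
            return True
    return False

# Hypothetical test metadata for illustration.
tests = [
    {"Name": "MyTestMethod", "Priority": 2},
    {"Name": "OtherTest", "Priority": 1},
    {"Name": "SlowTest", "Priority": 3},
]
selected = [t for t in tests if matches_filter(t, "Priority=1 | Name=MyTestMethod")]
```

Here the first test is selected by name and the second by priority, while the third matches neither clause.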

Cloning Environment

A release definition often contains several environments such as development, testing, QA, and production. Typically, all these environments are fundamentally similar, and the techniques used to set up and deploy to each one are the same with the exception of minor differences in configuration for each environment and task (such as target URLs, service paths, and server names).

After you have added an environment to a release definition and configured it by adding tasks and setting the properties for each one, clone it to create another environment within the same definition. Select the environment you want to clone in the environments column, open the Add environment list, and choose Clone selected environment.

 

The cloned environment has the same tasks, task properties, and configuration settings as the original. The Add new environment dialog that opens lets you change the pre-deployment approval and trigger settings for the cloned environment.

Release Triggers

You can configure when releases should be created through release triggers in a release definition.

 

When certain types of artifacts are specified in a release definition, you can enable Continuous deployment.

 

This setting instructs Release Management to create new releases automatically when it detects new artifacts are available. At present this option is available only for Team Foundation Build artifacts and Git-based sources such as Team Foundation Git, GitHub, and other Git repositories.

If you have linked multiple Team Foundation Build artifacts to a release definition, you can configure continuous deployment for each of them. In other words, you can choose to have a release created automatically when a new build of any of those artifacts is produced. You can further choose to create the release only when the build is produced by compiling code from certain branches (only applicable when the code is in a Team Services or a TFS Git repository) or when the build has certain tags.

You can also choose to have a release created automatically based on a schedule. When you select this option, you can select the days of the week and the time of day that Release Management will automatically create a new release. You can configure multiple schedules as required.

You can also combine the two automated settings and have releases created automatically either when a new build is available or according to a schedule.

See these steps in action here: 

 

 

 

Blog post by: Sam Banerjee. Reach Sam @ sam@medullus.com

Sam ensures Medullus’s drumbeat of execution is in rhythm (he heads Operations!) – an IT professional with a myriad of experience across various platforms and domains, with significant knowledge in the design, implementation and testing of various systems for organizations such as ADP, Bristol-Myers Squibb & Ross Stores. With a Masters in Computer Science from SUNY, Sam leads the tech innovations within Medullus (Artificial Intelligence, Blockchain, Mobility, BI).


Demystifying Microsoft Gold Certification

Microsoft Gold Partner Medullus

Why we got Gold Certified

Certification gives a company much more than ‘bragging rights’ – more so when the path to achieving the certification includes a combination of examinations that team members need to clear, validation of the team’s services by existing clients, and a cost (investment) that the company needs to make.

Microsoft Gold Certification fits the checklist above. There are various competencies a company can choose to specialize in – we chose Application Development.

The ever-changing technology gamut is a constant challenge for anyone looking to design the next solution he or she is tasked with.

Microsoft’s Gold Certification in Application Development encompasses a broad array of specializations, which ensures that the fundamentals are in place and the building blocks of any solution follow best practices.

 

How we got Gold Certified

During our monthly company huddle in September, we announced to the team that we would need to be Gold Certified in Microsoft’s Application Development area by the end of the year. This was a logical next step to our DevOps initiative taken earlier (read my post on the Road to DevOps).

  • We listed the team members and the exam(s) each of them would need to clear
  • We pointed them to the study materials
  • We listed our existing clients that can give us the required testimonials
  • We put the $$ in our budget

Using methods of constant touch, encouragement and feedback from the programmers, we were able to clear the certification a week before the deadline! We worked with Microsoft to follow the proper procedures, and in February we were able to announce the following in our monthly huddle.

Medullus Road to Microsoft Gold

 

Some of the Challenges we had to overcome

We are spread across a few locations, with our Sales office in NY, Operations in NJ, and development in Kolkata, India. Communication within the company was not a challenge, as we have been able to leverage collaboration tools like Slack and MS Teams well over time (Have you read Tej’s post on this topic yet? If not, here you go). The challenge was working with Microsoft’s outsourced vendor Pearson VUE and ensuring that our members in all locations were working towards the same company certification. We had to document all incident ticket notes and ensure that exams were registered and taken under the same company umbrella.

Another hurdle we faced was a lack of proper communication about the product version on which some exam questions would be based. We took to the online community and forums to get help (if you are stuck on this, please reach out to info@medullus.com or comment here and we will be glad to share what we learned).

Finally, there was listing the company on Pinpoint – Microsoft has this ‘Work or School’ vs ‘Personal’ account distinction (which in my mind is not required) – we worked with Microsoft Help (they are really good!) to make the required corrections and finally see:

Medullus Achieves Microsoft Gold Certification

 

Where Next

As we continue investing in our Labs, doing some innovation in the Finance and Buying Group domains using Blockchain and AI, we will ensure we enforce most of the checklist items recommended by Microsoft, as they should be the basic building blocks for any solution. Currently we are working through the latency effects of blockchain – as we push the limits of that technology we are constantly checking to ensure that security (and anonymity) is not compromised. Clearing a few ‘difficult’ examinations during the certification has put a ‘nitro boost’ in our team’s confidence, and we plan to leverage that even more as we march ahead …

 

 

Blog post by: Sam Banerjee. Reach Sam @ sam@medullus.com



To App or not to App


Technical solutions for business problems are like ordering coffee in Starbucks –

  • tall / Grande / Venti / Trenta,
  • black / regular / fat-free / 1% / 2%,
  • Sugar / sugar-free (I’ve always wondered why it is ‘free’ – it is not free of sugar, nor is it free of cost!! but I digress …)

The point here is options! – as you can read in our previous blog post (https://medullus.com/questions-to-ask-before-developing-an-application/).

In today’s blog post we outline the decision to App or not to App –

Early this year we were tasked with developing a solution for a logistics vendor that was facing challenges in proving the condition of pallets – at delivery, and at the time each vehicle leaves the warehouse.

Their first attempt, with another software vendor, was an app that would take pictures and upload them tagged to each item in their ERP.

The challenge they faced was that the syncing was taking too long, and on certain devices it would lock up and hang the process. Also, the devices in use were not standard – some were Samsung tabs running different Android versions; others, in the main location, were iPads running iOS.

At this point we were called up to ‘rescue’ them and deliver a packaged solution to the initial problem.

Our first approach was to see if we could salvage and reuse any of the existing effort. To this end we asked two vital questions, one of which should have been asked at the initial decision point:

  • Why did they go the app route?
  • Will / Can a network connection be present in the warehouses at all times?

In response to the first question: they envisioned a scalable app onto which more functionality could be added over time.

The simple answer to the second question was ‘YES’ (and a high speed one too!)

With these answers we went back to the drawing board to see if we could introduce asynchronous programming into the sync routine, in order to free up the device forms while the sync was in progress (keep an eye out for our upcoming blog on asynchronous programming!).

Though the async calls resolved the app-hang issues, they were still unable to roll out a single type of hardware to all the users in all their warehouses.

Even though the app was no longer hanging during the sync routine … the sync was still taking a looooooong time! After further analysis we found the bottleneck to be at the database connection point of the ERP.

At this point we went ahead with a responsive web application with its own database … making it a device-agnostic approach.

Technology Used

  • HTML 5
  • MVC 4
  • ENTITY FRAMEWORK 5.0.0
  • BOOTSTRAP 3.2.0
  • JAVASCRIPT
  • SQL SERVER 2008R2 and above

 

The front end was developed in HTML5 using the Bootstrap framework (https://getbootstrap.com/). The application reads from the ERP database (via a web service) in real time, and the uploaded pictures are synced back using a database job that runs in the background every 4 hours (the closest dispatch location from any warehouse was 6 hours out, and it took around 30 minutes for the largest batch to sync … hence 4 hours was a good frequency).

While some Android devices had the option to reduce image resolution, the iOS devices lacked that feature in the kernel, and it could not be achieved without a third-party solution. To overcome this, we wrote a compression engine that reduced the image size during upload to the DB.
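The actual engine ran in the app’s own stack and resized/re-encoded the images before upload. As a stdlib-only sketch of the same idea – shrink the byte payload on the client before it hits the database, and reverse the step on read – here is a zlib-based stand-in (real image data would be resized rather than deflated):

```python
import zlib

def compress_for_upload(image_bytes: bytes, level: int = 6) -> bytes:
    """Deflate the payload before writing it to the DB image column.
    Stand-in for the real engine, which resized/re-encoded images."""
    return zlib.compress(image_bytes, level)

def restore_after_download(blob: bytes) -> bytes:
    """Inverse step when reading the image back out of the DB."""
    return zlib.decompress(blob)

# Repetitive sample data stands in for an uncompressed bitmap.
payload = b"\xff\xd8" + b"\x00" * 50_000
blob = compress_for_upload(payload)
assert restore_after_download(blob) == payload
assert len(blob) < len(payload)
```

The round-trip is lossless here; the production engine traded a little image quality for a much smaller upload instead.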

The code snippets for image handling and saving to SQL Server are shown below.
Note: the error-handling routines are omitted for ease of understanding!

Getting Images from Device Camera:

blog-app-getting

Cropping larger size Images:

blog-cropping

Saving Images to SQL Server:

blog-saving

END RESULT:

blog-app

 

Note: This app is flexible to any device that supports HTML5.

 


Sam Banerjee

Sam brings years of Business Intelligence and Software System Analysis experience to Medullus Systems. Prior to being a partner/co-founder at Medullus, Sam led several large-scale projects in the BI world at big-name corporations like Bristol-Myers, Fresenius, and ADP Payroll. Sam brings new ideas to improve BI in companies, products and projects. Sam is a certified Microsoft BI Developer and holds a Masters in Computer Science. When asked about himself, Sam says “If you can’t measure it, you can’t manage it. For this reason alone, cutting-edge software that fits your business needs to be on your radar screen and my cell phone number on your speed dial!” In his personal life Sam is a proud husband and father to 2 boys and enjoys his “cutting edge” drum-set, rock shows and the New York Giants.


Leverage the power of stored procedures


Abstract

When tasked to develop an application, we believe there are three options – and you can pick any two!

  1. Do it right
  2. Do it quick
  3. Do it cheap

Option 1 should be the default, but ‘right’ is a relative term! And if, depending on experience, it is not chosen, there are always repercussions on maintenance.

In this post I will demonstrate one way of doing it right. Occasionally I will refer to an application that my company recently and successfully developed using this approach, along with the reasons why it was successful.

Business Requirement

Most forecasting applications (like MRP or asset management) have an inherent need to create orders – Sales Orders, Purchase Orders, or Work Orders.

These orders are based on a group of settings. The screenshot below outlines the settings for a healthcare asset management product.

[Image: blog-Leverage1 – order-generation settings screen]

 

Some of the underlying requirements (gathered during the design and analysis sessions) were:

  • Settings will be added or updated frequently
  • Settings will differ across entities (departments)
  • Business logic around each setting may be updated from time to time
  • Performance needs to be optimized since real-time order creation is required

Technical Solution (Database Only)

Given the requirements, we encapsulated the entire business logic in stored procedures … instead of using inline SQL or LINQ. While LINQ does have advantages such as abstraction and support across multiple databases, we went the stored-procedure route for the following reasons:

  • Ease of deployment – the code does not need to be compiled (and deployed) every time the business logic changes (a key requirement).
  • Network traffic – sprocs need only serialize the sproc name and argument data over the wire, while LINQ sends the entire query. This can get really bad when queries are very complex … which is usually the case with a continuously moving requirement such as this.
  • Performance – using sprocs we can optimize the queries with HINTS, indices and other techniques to speed up the transactions. While this can also be done with inline SQL, testing each change would require recompiling the application.
  • Maintenance – a set of stored procedures is a very easy way to inventory exactly what queries may be running on the system. With inline queries, one needs to run a trace that covers an entire business cycle, or parse through all of the application code.
  • Troubleshooting – error logging (in database tables) allows us to pinpoint the source of any issue, and updating the logic is only a matter of updating the stored procedures instead of … well, you get the picture!
Below are the database objects:

  • Table for storing the settings

[Image: blog-Leverage2 – settings table]

  • Log table

[Image: blog-Leverage3 – log table]

  • Stored Procedure – where all the business logic is written – currently at version 66!

[Image: blog-Leverage4 – stored procedure]
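The pattern behind these objects – a settings table, a log table and one routine holding the business logic – can be sketched in Python with the built-in `sqlite3` module. This is only an illustration: the actual solution used SQL Server stored procedures (SQLite has none, so a plain function stands in for the sproc), and every table, column and function name below is hypothetical.

```python
import sqlite3

# In-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Settings table: one row per department (hypothetical columns)
    CREATE TABLE order_settings (
        department    TEXT PRIMARY KEY,
        reorder_point INTEGER NOT NULL,
        order_qty     INTEGER NOT NULL
    );
    -- Log table used for troubleshooting
    CREATE TABLE order_log (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        department TEXT,
        message    TEXT,
        logged_at  TEXT DEFAULT CURRENT_TIMESTAMP
    );
""")

def generate_order(conn, department, on_hand):
    """Stand-in for the stored procedure: read settings, decide, log."""
    row = conn.execute(
        "SELECT reorder_point, order_qty FROM order_settings WHERE department = ?",
        (department,),
    ).fetchone()
    if row is None:
        conn.execute("INSERT INTO order_log (department, message) VALUES (?, ?)",
                     (department, "no settings found"))
        return None
    reorder_point, order_qty = row
    if on_hand < reorder_point:
        conn.execute("INSERT INTO order_log (department, message) VALUES (?, ?)",
                     (department, f"order created for qty {order_qty}"))
        return order_qty
    return None

conn.execute("INSERT INTO order_settings VALUES ('Radiology', 10, 25)")
print(generate_order(conn, "Radiology", 4))   # below the reorder point -> 25
print(generate_order(conn, "Radiology", 50))  # enough on hand -> None
```

Because the decision rules live next to the data, changing the reorder logic means altering one routine (and its settings rows), not recompiling and redeploying the application – which is the whole argument of this post.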

Conclusion:

As you can see, the sproc is under continual revision. The decision to implement the business logic in a sproc was key: the sprints in the agile process were quick and easy to roll out.

Yes, there are various ways to hold your nose … I've demonstrated a few below.

[Image: blog-Leverage]

 


Sam Banerjee


Posted on

Questions to ask BEFORE developing an application


  1. Desktop or Web Application ?
  2. To App or Not to App ?
  3. Platform (Windows, Linux etc…)
  4. Technology (Microsoft, Open Source, etc…)

As you might know, these are the prerequisites before any sort of development can begin. Each option has its pros and cons, and the correct answers depend on various factors. In this post I'm outlining how we collaborate with our clients to get the right answers.

Desktop or Web Application … Or Hybrid

Simply speaking, a desktop application is a computer program that runs locally on a device, such as a desktop or laptop computer, in contrast to a web application, which is delivered to a local device over the Internet from a remote server.

Desktop applications have traditionally been limited by the hardware on which they are run. They must be developed for and installed on a particular operating system, and may have strict hardware requirements that must be met to ensure that they function correctly.

Web applications are more device-agnostic. In order to use a web application, the two essentials are (1) a web browser and (2) an Internet connection.

While web applications boast high availability, that is not always desirable. Allowing users to access applications only via the web does pose security risks, and given the recent attacks on corporate and government websites, even the best security protocols are not always good enough.

In order to balance security and ease of development, we often advise clients to go with a hybrid system, where the client is furnished as a desktop application while the database lives in the cloud.

Given the pros and cons of each application type, we analyze the requirements with our clients, asking them the following questions:

  • How many users will be using the application? If the application has a high number of users, then a web application is preferable, since it needs to be tested only on a handful of modern browsers rather than against each user's machine configuration (memory, disk space etc…).
  • Once completely developed, how often will changes be rolled out? If frequent changes need to go live quickly, then once again web takes precedence over desktop, since deploying changes to a server is quicker than deploying them to everyone's desktop.
  • In the rare event of an internet outage, will the unavailability of the application be business-critical? Before they say 'OF COURSE' to this, we clarify situations and run through scenarios. Recently we developed a Point of Sale application for a hardware chain, and one of their requirements was to stay in business even without internet. In that scenario 100% web reliance is not feasible, so we went the desktop route to enable offline operations.
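The offline-capable design mentioned for the Point of Sale system can be sketched as "write locally first, sync when connected". This is a hypothetical sketch, not the actual POS code: the class, table and method names are invented, and the push to the central server is stubbed out with a local flag.

```python
import sqlite3

class OfflineCapablePOS:
    """Hypothetical offline-first sketch: sales are queued in a local store
    and pushed to the central database whenever a connection is available."""

    def __init__(self, online=False):
        self.online = online
        self.local = sqlite3.connect(":memory:")  # local store on the device
        self.local.execute(
            "CREATE TABLE pending_sales (sku TEXT, qty INTEGER, synced INTEGER DEFAULT 0)")

    def record_sale(self, sku, qty):
        # Always write locally first so an internet outage never blocks a sale.
        self.local.execute(
            "INSERT INTO pending_sales (sku, qty) VALUES (?, ?)", (sku, qty))
        if self.online:
            self.sync()

    def sync(self):
        # Push unsynced rows to the central server (stubbed out here),
        # then mark them as synced. Returns how many rows were pushed.
        unsynced = self.local.execute(
            "SELECT rowid FROM pending_sales WHERE synced = 0").fetchall()
        for (rowid,) in unsynced:
            self.local.execute(
                "UPDATE pending_sales SET synced = 1 WHERE rowid = ?", (rowid,))
        return len(unsynced)

pos = OfflineCapablePOS(online=False)
pos.record_sale("HAMMER-01", 2)   # works even with no connection
pos.online = True
print(pos.sync())                 # -> 1 pending sale pushed
```

The design choice is the same one argued above: the desktop client owns a local store, so the business keeps running through an outage and reconciles with the cloud database afterwards.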

Even though a web application is our first preference, it is not always the best option for the client. Determining the requirements and analyzing various scenarios enables us to decide which route to take.

In upcoming posts I will delve into platform, technology and other important considerations that make the foundations of any custom-developed application bullet-proof!

 


Sam Banerjee


Posted on

Take the wait out of your application development

Solving current problems with yesterday’s process will hurt your business … BIG TIME!
Software development has improved exponentially over the last few years … not only in technology but mainly in the way it is executed.
There is no point talking about buzzwords like Big Data, HTML5, Mobility etc… if it takes too long to use them and see the improvement in your business.
Client engagement in design is the key to success in software projects.
At Medullus Systems, we use visualization tools to get feedback from the client (and users) regarding screen design within the first week of the project.
While we wait for the feedback, the business logic and backend work is already underway using Rapid Application Development tools. This is possible using methods that distinctly separate the presentation layer (user screens), the business-logic layer and the data layer.
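The layer separation described above can be sketched in a few lines. This is a minimal illustration, not Medullus's actual code: all function and field names are hypothetical, and each layer knows nothing about the internals of the others, which is what lets the screens change from user feedback without touching the backend.

```python
# Data layer: the only code that knows where and how data is stored.
def fetch_orders_from_store(store):
    return store.get("orders", [])

# Business-logic layer: pure rules, no UI or storage details.
def total_open_orders(orders):
    return sum(o["amount"] for o in orders if o["status"] == "open")

# Presentation layer: formatting only; it can be redesigned based on
# user feedback without changing the two layers above.
def render_dashboard(total):
    return f"Open orders total: ${total:,.2f}"

# Wiring the layers together:
store = {"orders": [{"amount": 1200.0, "status": "open"},
                    {"amount": 300.0,  "status": "closed"},
                    {"amount": 50.5,   "status": "open"}]}
print(render_dashboard(total_open_orders(fetch_orders_from_store(store))))
# prints "Open orders total: $1,250.50"
```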
Design changes based on user feedback are integrated very easily and quickly.
It is very common for requirements to be refined as users see the screens – the process outlined above identifies these changes and enables us to build them into the app without change requests after the project is completed!
Making the project live in a beta version allows most of the user base to comment on the experience and suggest additional changes. During this process any old data is migrated (and mostly cleaned) using migration tools mastered by our data experts.
Finally we leverage the hosting partner’s scalable plans to increase hardware as the user base increases.

 


Roni Banerjee

Roni has 16 years of experience leading small to large-scale IT projects for various markets. Roni successfully founded 2 companies spanning multiple locations and time zones. He rolls up his sleeves and gets into software development anytime you ask him, and database development is his passion – we call him "our sequel junkie"! Roni has a Bachelor's in Engineering, his very valued PMP, and is close to finishing his Global MBA from the coveted Warwick Business School in the UK. When asked about his personal life he says "We, my wife and 2 boys, live in the picturesque Hudson Valley region of New York. A Yankees and New York Giants fan, I also enjoy strumming my guitars every day, mixing recipes from different cultures when I get some time, and hacking away during an occasional round of golf."