Wednesday, May 6, 2020

ELK Integration with Back End application



WEB API -

  1. Install package Elastic.Apm.NetCoreAll [Ref: https://www.nuget.org/packages/Elastic.Apm.NetCoreAll]
  2. Update the appsettings.json file adding the following configuration to accommodate Elastic APM integration:
"ElasticApm": {
"SecretToken": "apm-server-secret-token",
"ServerUrls": "http://ec2-32-75-437-265.ap-southeast-2.compute.amazonaws.com:8200",
"ServiceName": "
Your_API" //allowed characters: a-z, A-Z, 0-9, -, _, and space. Default is the entry assembly of the application
}


SecretToken - APM server secret token.

ServerUrls - The URL of the server where the ELK stack is set up; the metrics data will be pushed to it.

ServiceName - The name of the service that is sending the metric data. The service appears under this name in the APM module of the ELK server.

  3. Modify Startup.cs - add "using Elastic.Apm.AspNetCore;" to the using section and call "app.UseElasticApm(Configuration);" in the Configure method.
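The wiring in Startup.cs looks roughly like this (a minimal sketch; everything other than the two Elastic.Apm lines is the standard ASP.NET Core template):

```csharp
using Elastic.Apm.AspNetCore;   // provides the UseElasticApm extension method
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    public void Configure(IApplicationBuilder app)
    {
        // Register the APM middleware early so every request is captured.
        app.UseElasticApm(Configuration);

        // ...rest of the usual pipeline (routing, endpoints, etc.)
    }
}
```

UseElasticApm reads the "ElasticApm" section of appsettings.json shown above, so no further code is needed.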
Windows Service (.NET Core)

  1. Install package Elastic.Apm [Ref: https://www.nuget.org/packages/Elastic.Apm]
  2. Install package Elastic.Apm.EntityFrameworkCore [Ref: https://www.nuget.org/packages/Elastic.Apm.EntityFrameworkCore]
  3. In a Windows service, the agent's Public API is used to capture the APM metrics.
  4. Copy the following code and paste it at the start of the method from which you want to begin capturing metrics, replacing the placeholder values with those for your service.

Environment.SetEnvironmentVariable("ELASTIC_APM_SECRET_TOKEN", "your secret token");
Environment.SetEnvironmentVariable("ELASTIC_APM_SERVER_URLS", "elastic server url");
Environment.SetEnvironmentVariable("ELASTIC_APM_SERVICE_NAME", "YourServiceName");
Agent.Subscribe(new EfCoreDiagnosticsSubscriber());

var outgoingDistributedTracingData = (Agent.Tracer.CurrentSpan?.OutgoingDistributedTracingData ?? Agent.Tracer.CurrentTransaction?.OutgoingDistributedTracingData)?.SerializeToString();
var transaction = Agent.Tracer.StartTransaction("YourServiceTransactionName", ApiConstants.ActionExec, DistributedTracingData.TryDeserializeFromString(outgoingDistributedTracingData));
try
{
Process();
}
catch(Exception ex)
{
transaction.CaptureException(ex);
}
finally
{
transaction.End();
}


  1. Agent.Subscribe(new EfCoreDiagnosticsSubscriber()); - captures the EF Core metrics; in a Windows service you have to subscribe to it explicitly.
  2. Process() - the method you want to send metric data for.
  3. transaction.CaptureException(ex) - captures errors and sends them to the APM server.
  4. transaction.End() - required to end the transaction.





Once this is done, running the service will start sending data to the APM server.



Thursday, July 5, 2018

“Port 4200 is already in use” when running the ng serve command

Windows 

  1. Command Window(cmd) - for /f "tokens=5" %a in ('netstat -ano ^| find "4200" ^| find "LISTENING"') do taskkill /f /pid %a
  2. Git Bash - netstat -ano | findstr :4200 to find the PID, then taskkill //PID <pid> //F (the double slashes stop Git Bash from treating the switches as paths)

Mac OS X & Ubuntu

  1. sudo lsof -t -i tcp:4200 | xargs kill -9

Tuesday, June 19, 2018

Common Utility Class

Boilerplate code for Common Class to group the required functions, needed quite often.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Management;
using System.Security.Cryptography;
using System.Text;
using System.Xml;
using System.Xml.Serialization;
using static System.String;

namespace Common
{
    public static class CommonUtils
    {
        // HardDrive is assumed to be a simple POCO with Model, Type and SerialNo string properties.
        public static ICollection<HardDrive> GetExternalHardDiskInfo()
        {
            var searcher = new ManagementObjectSearcher("SELECT * FROM Win32_DiskDrive");
            ICollection<HardDrive> lstExHDD = new List<HardDrive>();

            foreach (ManagementObject wmi_HD in searcher.Get())
            {
                HardDrive hd = new HardDrive
                {
                    Model = wmi_HD["Model"].ToString(),
                    Type = wmi_HD["InterfaceType"].ToString()
                };
                if (wmi_HD["SerialNumber"] != null)
                    hd.SerialNo = wmi_HD["SerialNumber"].ToString();

                lstExHDD.Add(hd);
            }
            return lstExHDD;
        }

        public static string CalculateMD5Hash(string FilePath)
        {
            using (var md5 = MD5.Create())
            {
                using (var stream = File.OpenRead(FilePath))
                {
                    var result = Convert.ToBase64String(md5.ComputeHash(stream));
                    return result;
                }
            }
        }

        public static DateTimeOffset GetAESTDateTimeOffset()
        {
            var utc = new DateTimeOffset(DateTime.UtcNow);
            var aest = TimeZoneInfo.FindSystemTimeZoneById("AUS Eastern Standard Time");
            return TimeZoneInfo.ConvertTime(utc, aest);
        }

        public static string GetAESTDate(string format)
        {
            var dateTimeOffset = GetAESTDateTimeOffset();
            return dateTimeOffset.LocalDateTime.ToString(format);
        }

        public static string FormatByteSize(long i)
        {
            // Get absolute value
            long absolute_i = (i < 0 ? -i : i);
            // Determine the suffix and readable value
            string suffix;
            double readable;
            if (absolute_i >= 0x1000000000000000) // Exabyte
            {
                suffix = "EB";
                readable = (i >> 50);
            }
            else if (absolute_i >= 0x4000000000000) // Petabyte
            {
                suffix = "PB";
                readable = (i >> 40);
            }
            else if (absolute_i >= 0x10000000000) // Terabyte
            {
                suffix = "TB";
                readable = (i >> 30);
            }
            else if (absolute_i >= 0x40000000) // Gigabyte
            {
                suffix = "GB";
                readable = (i >> 20);
            }
            else if (absolute_i >= 0x100000) // Megabyte
            {
                suffix = "MB";
                readable = (i >> 10);
            }
            else if (absolute_i >= 0x400) // Kilobyte
            {
                suffix = "KB";
                readable = i;
            }
            else
            {
                return i.ToString("0 B"); // Byte
            }
            // Divide by 1024 to get fractional value
            readable = (readable / 1024);
         
            // Return formatted number with suffix
            return readable.ToString("0.### ") + suffix;
        }

        public static string ConvertBytesToMegabytes(long bytes)
        {
            return $"{(bytes / 1024f) / 1024f}";
        }

        public static string GetKeyFromFile(string FilePath)
        {
            if (IsNullOrWhiteSpace(FilePath))
                return Empty;

            var result = FilePath.Substring(FilePath.IndexOf(@"\", 0) + 1, FilePath.Length - (FilePath.IndexOf(@"\", 0) + 1)).Replace("\\", "/");
            return result;           
        }

        public static bool FolderMatch(string[] SourceFolderArray, string[] TargetFolderArray)
        {
            bool result = !TargetFolderArray.Except(SourceFolderArray).Any();           
            return result;
        }

        public static string ConvertClassToXML(Object ClassObjectToConvert)
        {
            string xmlEncodedList;
            using (var stream = new MemoryStream())
            {
                using (var writer = XmlWriter.Create(stream))
                {
                    new XmlSerializer(ClassObjectToConvert.GetType()).Serialize(writer, ClassObjectToConvert);
                }
                // Read the buffer only after the writer is disposed, so its output is fully flushed.
                xmlEncodedList = Encoding.UTF8.GetString(stream.ToArray());
            }
            return xmlEncodedList;
        }

        public static long CalculateLocalFilesListSize(List<string> LocalFilesList)
        {
            long totalFileSize = 0;
            FileInfo f = null;
            LocalFilesList.ForEach(x =>
            {
                f = new FileInfo(x);
                totalFileSize += f.Length;               
            });
           
            return totalFileSize;
        }

        public static IEnumerable<string> GetAllFilesFromPath(string FilePath)
        {
            if (Directory.Exists(FilePath))
            {
                var query = new DirectoryInfo(FilePath).GetFiles("*.*", SearchOption.AllDirectories);
                var result = query.Select(x => Path.GetFullPath(x.FullName));
                return result;
            }
            return null;
        }
    }
}
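A quick usage sketch of the class above (the file paths are placeholders):

```csharp
// Hypothetical caller exercising a few of the helpers above.
var hash = CommonUtils.CalculateMD5Hash(@"C:\temp\report.pdf");
var size = CommonUtils.FormatByteSize(1536);            // "1.5 KB"
var aestNow = CommonUtils.GetAESTDateTimeOffset();
var files = CommonUtils.GetAllFilesFromPath(@"C:\temp");
```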

Sunday, May 21, 2017

Getting the string between two repeating characters in SQL

Sometimes you need to extract the substrings that sit between occurrences of a repeating character in a parent string.

Example - "/a quick silver/fox jump over the/lazy dog"
Part 1 - a quick silver
Part 2 - fox jump over the
Part 3 - lazy dog

Problem: To get the Part1, Part 2, Part 3

DECLARE @test varchar(300)
SET  @test  = '/a quick silver/fox jump over the/lazy dog'

--Getting Part 1
SELECT SUBSTRING(@test, CHARINDEX('/', @test)+1 , CHARINDEX('/', @test, CHARINDEX('/', @test)+1) - (CHARINDEX('/', @test) + 1))

--Getting Part 2
SELECT SUBSTRING(@test, (CHARINDEX('/', @test, CHARINDEX('/', @test)+1)+1), ((CHARINDEX('/', @test, CHARINDEX('/', @test, CHARINDEX('/', @test)+1)+1))-(CHARINDEX('/', @test, CHARINDEX('/', @test)+1)+1)))

--Getting Part 3
SELECT SUBSTRING(@test, (CHARINDEX('/', @test, CHARINDEX('/', @test, CHARINDEX('/', @test)+1)+1)+1), LEN(@test))


Hope it will help my developer community.

Thursday, April 6, 2017

Query to get all procedures which may have NOEXPAND hint

SELECT * FROM sys.Procedures WHERE is_ms_shipped = 0 
AND OBJECT_DEFINITION(object_ID) LIKE '%NOEXPAND%'

Monday, July 4, 2016

Error message “No exports were found that match the constraint contract name”

Last Friday I faced a problem opening my Visual Studio solution; when I tried to open the solution file, the error message I got was:

No exports were found that match the constraint contract name

Solution - Select the appropriate folder as per your VS installation, delete all the files from the "ComponentModelCache" folder, and then restart Visual Studio.

For 

Visual Studio 2012 
C:\Users\[username]\AppData\Local\Microsoft\VisualStudio\11.0\ComponentModelCache

Visual Studio 2013
C:\Users\[username]\AppData\Local\Microsoft\VisualStudio\12.0\ComponentModelCache

Visual Studio 2015
C:\Users\[username]\AppData\Local\Microsoft\VisualStudio\14.0\ComponentModelCache

Hope it will help you.

Friday, October 2, 2015

An exception of type 'System.Data.ProviderIncompatibleException' occurred in EntityFramework.dll but was not handled in user code

You will encounter this error when either your connection string is not correct for Entity Framework or your SQL Server service is not running. Check the web.config to rectify the connection string, or open Services (Windows key + R, then services.msc) and check whether the SQL Server instance service is running.

Wednesday, March 11, 2015

Memcached, ASP.NET Cache and Varnish

A couple of thoughts while revisiting my experience with ASP.NET cache, Memcached and Varnish. Memcached and Varnish are two different caching mechanisms and cannot be compared directly, though their ultimate goal is the same: to decrease the time the server takes to serve a request.


Following is some insight in their capabilities:


Varnish is a server that runs as a reverse proxy in front of the real web server (Apache, nginx, etc.). It stores the server's responses in memory and can decide to serve a subsequent request from that store without passing it to the backend (the web server in this context), so it is essentially HTML caching. Varnish saves your dynamic web server CPU load by making it generate pages less frequently (and lightens the DB load a bit as well). It is focused on HTTP acceleration, and by default it caches only unauthenticated traffic: requests carrying cookies are typically passed through to the backend.


Memcached is a distributed caching mechanism, which means that if I have a cluster of servers accessing the cache, all of them are essentially reading from and writing to the same cache. Once an entry is placed in the cache, all machines in the cluster can retrieve the same cached item, and invalidating an entry invalidates it for everyone. It saves the DB from doing a lot of read work, and unlike Varnish it can cache authenticated traffic.


Comparing the .NET cache with Memcached: the disadvantage is that accessing a Memcached cache requires inter-process/network communication, which carries a small performance penalty over the .NET caches, which are in-process. Memcached runs as an external process/service, so you need to install and run that service in your environment; the .NET caches don't need this step as they are hosted in-process. In a distributed environment, however, an out-of-process caching solution like Memcached is far more scalable than an in-process one like the ASP.NET cache.
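To make the in-process vs out-of-process distinction concrete, here is what the in-process side looks like with System.Runtime.Caching (a sketch; the key and payload are made up, and the Memcached side would go through a client library such as EnyimMemcached over the network instead):

```csharp
using System;
using System.Runtime.Caching;

// In-process: the cache lives inside this application's memory,
// so reads are fast but other servers in the cluster cannot see it.
var cache = MemoryCache.Default;
cache.Set("user:42", "cached payload",
          new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10) });

var hit = (string)cache.Get("user:42"); // no network hop involved
```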


Note: Generally, websites do deploy both Varnish and Memcached. Varnish, to speed up delivery of its cache hits, and when there is a cache miss the application server might have access to some data in Memcached which will be available to the application faster than what the database is capable of. 


Thursday, October 2, 2014

Development with Amazon DynamoDB in C#


In this post I will cover the important things you should know before starting development with Amazon DynamoDB in C#. The key concepts, briefly described:

Table - A table is more or less the same thing you already know and it needs a name and a hash key to be created.

Hash Key - A hash key is like a primary key. You can optionally specify a range key; the combination of hash and range key must uniquely identify an item.

Item - An item is like a record, but only the hash key (and the range key, if defined) is required, so every item can have a different data structure.

DynamoDB is a database service in the cloud, so you can't use SQL statements to manipulate data; the Query and Scan operations are the way to do it. The Query operation is basically a get by the hash key of the item you want to retrieve (that's why it's important to choose IDs carefully); you can optionally narrow it by the range key. Query performs better than Scan. Scan is used in the same way, except that you can specify any attribute as a search condition, so the table must be fully scanned, decreasing performance. You can use the Query and Scan operations in two ways. The first is by retrieving a list of attribute names and values (which you then have to read/parse yourself), as illustrated in the following sample (note that this simple Query uses the hash key only).

List<Dictionary<string, AttributeValue>> Foo(string hashKeyValue, string tableName)
{
    // Old (v1) AWS SDK fluent syntax; hash-key-only query.
    var queryReq = new QueryRequest()
        .WithHashKeyValue(new AttributeValue().WithN(hashKeyValue));
    queryReq.WithTableName(tableName);

    QueryResponse response = Client.Query(queryReq);
    return response.QueryResult.Items;
}

The second way to use the Query and Scan operations is by decorating your classes and letting the Amazon API parse the attribute names and values for you. In this mode you have to add the following attributes to your class:
  • Table Name attribute “[DynamoDBTable("YourTableName")]”: this attribute must be located on top of each type being persisted in DynamoDB and it has the same effect that the tableName variable of the first sample.
  • Hash key and/or hash range attributes “[DynamoDBHashKey]”: this attribute must be located on top of the desired variable being persisted as part of a class.
After properly decorating your class you can use the Query operation as seen in the following sample.

public IEnumerable<T> Query<T>(object key)
{
      // context is a DynamoDBContext instance.
      return context.Query<T>(key);
}
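A decorated class for this second mode might look like the following (the table, class and property names are made up for illustration):

```csharp
using Amazon.DynamoDBv2.DataModel;

[DynamoDBTable("Books")]          // must match the DynamoDB table name
public class Book
{
    [DynamoDBHashKey]             // the table's hash key
    public string Id { get; set; }

    [DynamoDBRangeKey]            // optional range key
    public string Title { get; set; }

    public double Price { get; set; }  // plain attributes need no decoration
}

// With a DynamoDBContext, Query parses the attributes into Book objects:
// IEnumerable<Book> results = context.Query<Book>("some-id");
```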


Querying and scanning DynamoDB tables is easier with decorated classes. Here are some issues and restrictions I have found; they may help you in the process of using DynamoDB:
  • DynamoDB does not allow null or empty string attribute values.
  • If you fail to decorate the class you’ll receive a “No DynamoDBTableAttribute on type” exception.
  • If you receive a creepy "The start tag does not match the end tag" exception, it may be because you're behind a firewall; use a tool like Wireshark to make sure the requests can reach the Amazon server.
  • If you're wondering how to work without SQL, don't worry: double-check the Query sample using decoration and you'll see that an IEnumerable collection is returned, so you can use LINQ (and Sum, Max, Avg, of course).

Coding Principles To Follow


The other day I was discussing this with one of my colleagues, who was frustrated after doing a code review; his opinion was that developers take coding to standards lightly, while the agile model puts a lot of pressure on them to deliver on time, leaving code quality poor. To a certain extent I agree with the developers' version; however, in the long term this practice produces a lot of serious maintainability and performance issues, and leaves technical debt to be borne by the application when it goes to production.

A bad design has the following three characteristics:
  • Rigidity - It is hard to change because every change affects too many other parts of the system.
  • Fragility - When you make a change, unexpected parts of the system break. 
  • Immobility - It is hard to reuse in another application because it cannot be disentangled from the current application.
Whenever we code or design, we should keep the following principles in mind; they can make our lives simpler:

Open Close Principle
  • Software entities like classes, modules and functions should be open for extension but closed for modifications.
When writing classes, make sure that when you need to extend their behavior you don't have to change the class but can extend it. The same principle applies to modules, packages and libraries. If you have a library containing a set of classes, there are many reasons (backward compatibility, regression testing) for which you'll prefer to extend it without changing the code that was already written. This is why our modules should follow the Open Close Principle. For classes, the Open Close Principle can be ensured by using abstract classes, with concrete classes implementing their behavior; this enforces having concrete classes extend abstract classes instead of changing them. Particular cases of this are the Template Method pattern and the Strategy pattern.
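A small sketch of the principle (the shape names are invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Open for extension: new shapes are added by deriving, not by editing AreaCalculator.
public abstract class Shape
{
    public abstract double Area();
}

public class Rectangle : Shape
{
    public double Width, Height;
    public override double Area() => Width * Height;
}

public class Circle : Shape
{
    public double Radius;
    public override double Area() => Math.PI * Radius * Radius;
}

public static class AreaCalculator
{
    // Closed for modification: works for any future Shape subclass unchanged.
    public static double TotalArea(IEnumerable<Shape> shapes) => shapes.Sum(s => s.Area());
}
```

Adding a Triangle later means adding one class; TotalArea never changes.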

Dependency Inversion Principle
  • High-level modules should not depend on low-level modules. Both should depend on abstractions.
  • Abstractions should not depend on details. Details should depend on abstractions.
The Dependency Inversion Principle states that we should decouple high-level modules from low-level modules by introducing an abstraction layer between the high-level and low-level classes. Furthermore, it inverts the dependency: instead of writing our abstractions based on details, we should write the details based on abstractions.
Dependency Inversion and Inversion of Control are the better-known terms referring to the way the dependencies are realized. In the classical way, when a software module (class, framework) needs some other module, it initializes and holds a direct reference to it, which makes the two modules tightly coupled. To decouple them, the first module provides a hook (a property or parameter) and an external module controlling the dependencies injects the reference to the second one.
By applying Dependency Inversion, a module can easily be swapped for another just by changing the dependency. Factories and Abstract Factories can be used as dependency frameworks, but there are also specialized frameworks for this, known as Inversion of Control containers.
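A sketch of the inversion using constructor injection (all the names here are illustrative):

```csharp
// The abstraction both layers depend on.
public interface IMessageSender
{
    void Send(string message);
}

// Low-level detail depends on the abstraction, not the other way round.
public class EmailSender : IMessageSender
{
    public void Send(string message) { /* SMTP details here */ }
}

// High-level module receives its dependency through the constructor hook;
// an IoC container (or a factory) decides which implementation is injected.
public class Notifier
{
    private readonly IMessageSender _sender;

    public Notifier(IMessageSender sender) => _sender = sender;

    public void NotifyUser() => _sender.Send("Your order has shipped.");
}
```

Swapping EmailSender for an SmsSender requires no change to Notifier.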

Interface Segregation Principle
  • Clients should not be forced to depend upon interfaces that they don't use.
This principle teaches us to take care in how we write our interfaces: add only the methods that belong there. If we add methods that don't belong, every class implementing the interface will have to implement them as well. For example, if we create an interface called Worker and add a lunch-break method, all workers will have to implement it; what if the worker is a robot?
In conclusion, interfaces containing methods that are not specific to them are called polluted or fat interfaces; we should avoid them.
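The Worker/robot example above, sketched in code (the interface and class names are invented):

```csharp
// Instead of one fat IWorker with a LunchBreak the robot cannot honour,
// split the interface so each client depends only on what it uses.
public interface IWorkable
{
    void Work();
}

public interface IFeedable
{
    void LunchBreak();
}

public class HumanWorker : IWorkable, IFeedable
{
    public void Work() { /* ... */ }
    public void LunchBreak() { /* ... */ }
}

public class RobotWorker : IWorkable   // no forced LunchBreak implementation
{
    public void Work() { /* ... */ }
}
```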

Single Responsibility Principle
  • A class should have only one reason to change.
In this context a responsibility is considered one reason to change. This principle states that if a class has two reasons to change, we should split the functionality into two classes, each handling a single responsibility; in future, when we need to make a change, we make it in the class that handles it. When we change a class that has several responsibilities, the change might affect its other functionality.
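For example (class names invented for illustration):

```csharp
// Two reasons to change means two classes: report formatting and report
// persistence evolve independently, so they live apart.
public class ReportFormatter
{
    public string Format(string data) => $"<report>{data}</report>";
}

public class ReportSaver
{
    public void Save(string formattedReport, string path) =>
        System.IO.File.WriteAllText(path, formattedReport);
}
```

A change to the report layout touches only ReportFormatter; a move from files to a database touches only ReportSaver.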

Liskov Substitution Principle
  • Derived types must be completely substitutable for their base types.
This principle is an extension of the Open Close Principle in terms of behavior: we must make sure that new derived classes extend the base classes without changing their behavior, so that the derived classes can replace the base classes without any change in the code.
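The classic counter-example, sketched: a square deriving from a rectangle breaks substitutability, because setting one side silently changes the other (names are illustrative):

```csharp
public class Rect
{
    public virtual double Width { get; set; }
    public virtual double Height { get; set; }
    public double Area => Width * Height;
}

// Violates LSP: code that sets Width and Height independently on a Rect
// gets surprising behaviour when handed a Sq instead.
public class Sq : Rect
{
    public override double Width
    {
        get => base.Width;
        set { base.Width = value; base.Height = value; }
    }

    public override double Height
    {
        get => base.Height;
        set { base.Width = value; base.Height = value; }
    }
}
```

Code expecting `rect.Width = 4; rect.Height = 5;` to give an area of 20 gets 25 from a Sq, so Sq cannot substitute for Rect.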