Functional programming has been around since the late 1950s, based on lambda calculus concepts dating back roughly a century. It has been embraced by just about every modern language because of the guarantees it offers. Immutability is one property of functional programming, where functions never change a variable, but instead return a new one. When a function […]Read more "Computer Architecture is Non-Functional"
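The immutability idea mentioned in the excerpt can be illustrated with a tiny sketch (in Python for brevity; the function name is hypothetical, not from the post): instead of mutating its argument, the function returns a new value and leaves the original untouched.

```python
# Hypothetical illustration of immutability: return a new list
# rather than modifying the caller's list in place.
def append_immutable(xs, value):
    """Return a new list with value appended; xs is left unchanged."""
    return xs + [value]

original = [1, 2, 3]
updated = append_immutable(original, 4)
print(original)  # [1, 2, 3] -- unchanged
print(updated)   # [1, 2, 3, 4]
```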
My earlier Faster Sorting in C# blog described a Parallel Merge Sort algorithm, which scaled well from 4 cores to 26 cores, running 4X to 20X faster respectively than the standard C# Linq.AsParallel().OrderBy. In this blog, I’ll describe an even faster Parallel Merge Sort implementation – faster by another 2X. Performance of the New Approach C# […]Read more "Even Faster Sorting in C#"
What Is It? Computing constantly presents space-time trade-offs. To make an algorithm faster, more space can be used. Or, if space is at a premium, then a slower but more space-efficient algorithm can be used. This kind of trade-off occurs in every aspect of computing, such as software development and chip design. In […]Read more "To In-Place or To Not-In-Place"
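The in-place versus not-in-place trade-off described above can be sketched with array reversal (a Python illustration, not code from the post): the in-place version uses O(1) extra space by swapping, while the not-in-place version trades O(n) extra space for simpler code and an unmodified input.

```python
def reverse_in_place(xs):
    """O(1) extra space: swap elements from both ends toward the middle."""
    i, j = 0, len(xs) - 1
    while i < j:
        xs[i], xs[j] = xs[j], xs[i]
        i += 1
        j -= 1
    return xs

def reverse_not_in_place(xs):
    """O(n) extra space: build and return a new reversed copy,
    leaving the original list intact."""
    return list(reversed(xs))

print(reverse_in_place([1, 2, 3, 4]))      # [4, 3, 2, 1]
print(reverse_not_in_place([1, 2, 3]))     # [3, 2, 1]
```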
The number of cores in modern processors is growing. For quite a while we were stuck at 2 or 4 cores. But lately the number of cores has grown to 6 and 8 even for consumer processors, to 14 cores for Intel Xeon workstation and cloud CPUs, and to 32 cores for AMD desktop and […]Read more "Parallelism: Perform No Worse"
I wrote a blog, “Faster List.ToArray() and Copying in C#”, a while back, which showed several ways to copy from a List to an Array faster. One of these was implemented in the HPCsharp NuGet package and increased performance by 4X using multiple processor cores. In this blog I’ll show how to do the same for […]Read more "Faster Array.ToArray() and Copying in C#"
Standard deviation is one of the basic tools in a statistician’s toolchest for measuring variability within a data set. The basic formula can be found on Wikipedia. Within the formula, one of the summations takes every data point, subtracts the average value, and then squares the result. This blog will explore how this squaring affects standard […]Read more "How Standard Deviation Measures Warped Data"
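The squaring step mentioned in the excerpt can be shown in a short sketch (Python, for illustration; the post itself works in C#): because each deviation from the mean is squared, a single point far from the mean contributes disproportionately to the result.

```python
import math

def std_dev(data):
    """Population standard deviation: each deviation from the mean is
    squared, so far-away points dominate the sum."""
    mean = sum(data) / len(data)
    variance = sum((x - mean) ** 2 for x in data) / len(data)
    return math.sqrt(variance)

print(std_dev([10, 10, 10, 10]))  # 0.0 -- no variability
print(std_dev([10, 10, 10, 50]))  # one outlier inflates the result (~17.32)
```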
Standard deviation is a core statistical algorithm used to measure the variability of a data set. It is used extensively in data science to provide useful information about the data. The computation itself uses summation twice within the algorithm: once to compute the mean (average) of the data set, and again to sum the square of […]Read more "Parallel Standard Deviation"
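One common way to parallelize standard deviation is to compute per-chunk partial sums independently and then combine them. This sketch (Python; the post's actual C# implementation may differ) uses the algebraically equivalent one-pass sum-of-squares form so each chunk needs no knowledge of the global mean; the function names are illustrative only.

```python
import math

def partial_sums(chunk):
    """Per-chunk count, sum, and sum of squares.
    Each call is independent, so chunks can be processed in parallel."""
    return len(chunk), sum(chunk), sum(x * x for x in chunk)

def combine(parts):
    """Merge per-chunk partial sums into a global population std dev,
    using variance = E[x^2] - (E[x])^2."""
    n = sum(p[0] for p in parts)
    s = sum(p[1] for p in parts)
    sq = sum(p[2] for p in parts)
    mean = s / n
    return math.sqrt(sq / n - mean * mean)

data = list(range(100))
chunks = [data[i:i + 25] for i in range(0, 100, 25)]
parts = [partial_sums(c) for c in chunks]  # parallelizable step
print(combine(parts))  # ≈ 28.866
```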
Computer system memory is thought of as RAM – Random Access Memory. Originally, this meant that accessing any random location took the same amount of time. This is true of certain kinds of memory, such as Static RAM – SRAM. In this section we’ll discuss how current computer system memory has significantly deviated from […]Read more "Memory Access"
This blog is a slightly different kind of cheat sheet. It is based on common tasks we perform using the git command line. Hopefully, this blog will reduce your memorization overload. To Clone a Repository git clone https://repositoryName.git Grab the repository.git part from the git web UI for cloning a repository. This command will […]Read more "Git by Task"
I’ve made several attempts at parallelizing the LSD Radix Sort algorithm. This blog provides the key concepts and details of the latest attempt, which has succeeded and is beginning to pay dividends in higher performance. Performance Summary The following table shows the performance of three variations of the LSD Radix Sort on two different multi-core […]Read more "Parallel LSD Radix Sort"
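For readers unfamiliar with the algorithm being parallelized, here is a minimal sequential LSD Radix Sort sketch (Python, for non-negative integers; the post's parallel C# version is far more elaborate): keys are sorted by one digit at a time, least significant first, and each pass must be stable for the result to be correct.

```python
def lsd_radix_sort(keys, bits=32, digit_bits=8):
    """Sort non-negative integers below 2**bits by processing digits
    least-significant first; each pass is a stable bucket pass on one
    digit_bits-wide digit."""
    mask = (1 << digit_bits) - 1
    for shift in range(0, bits, digit_bits):
        buckets = [[] for _ in range(1 << digit_bits)]
        for k in keys:
            buckets[(k >> shift) & mask].append(k)  # stable: preserves order
        keys = [k for b in buckets for k in b]
    return keys

print(lsd_radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```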