Longest Ascending Sequence

I had a coding interview yesterday. For the first time in a long time, I let my nerves get the better of me, and when presented with an opportunity to code a solution, I whiffed on a simple problem. Frankly, I couldn’t leave it alone, so when I got home, I had to solve it.

The request is simple: find the longest sequence of ascending numbers in an array, and return that sequence.  It’s essentially a test of whether you understand iteration and arrays, and can write logic that constructs and compares sequences.  Here’s the solution I had come up with (without the corrections outlined below):
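The original listing isn’t reproduced here, but a sketch of that first attempt might look like the following. The method and variable names are my reconstruction, not the original code, and it deliberately includes the buggy comparison discussed below:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class AscendingSequence
{
    // Reads the array left to right, starting a new run whenever the
    // current value is less than the previous one. Note the "<" here:
    // it treats equal neighbors as still ascending (the bug below).
    public static List<int> Longest(int[] numbers)
    {
        var runs = new List<List<int>> { new List<int>() };

        for (int i = 0; i < numbers.Length; i++)
        {
            if (i > 0 && numbers[i] < numbers[i - 1])
                runs.Add(new List<int>());

            runs.Last().Add(numbers[i]);
        }

        // Sort the collection of runs by length and take the first longest.
        return runs.OrderByDescending(r => r.Count).FirstOrDefault();
    }
}
```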

Essentially, the code reads the array from left to right, building a new list every time it encounters a value that is less than the previous one.  Then, looking at the entire collection of lists, it’s easy to sort the collection and grab the first, longest sequence.

The Corrections

The gotcha is that the comparison operator is wrong, and the difference in functionality could easily be overlooked.

Consider this sequence: 1, 4, 6, 2, 4, 0, 1, 1, 9, 7

In this case, when using the less-than operator, the sequence returned will be 0, 1, 1, 9, as it’s the longest consecutive sequence of non-decreasing numbers.  Therefore, if we only want the longest consecutive sequence of strictly ascending integers, line 18 is incorrect and should actually be:
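In other words, the run-breaking comparison needs to treat equal neighbors as a break too, so `<=` replaces `<`. The variable names here are illustrative, not the original line 18:

```csharp
// Start a new run when the current value is NOT strictly greater
// than the previous one -- "<=" instead of "<":
if (i > 0 && numbers[i] <= numbers[i - 1])
    runs.Add(new List<int>());
```

With that change, the example sequence yields 0, 1, 9 rather than the non-decreasing run 0, 1, 1, 9.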

I’ll also note that my return statement could throw an exception, because c1 could be null since I’m using FirstOrDefault(), so a simple null check can prevent that error:
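A minimal sketch of that guard (again with illustrative names, not the original c1): FirstOrDefault() returns null when the source is empty, so coalesce to an empty list before returning.

```csharp
// FirstOrDefault() yields null for an empty collection of runs,
// so fall back to an empty list instead of returning null:
var longest = runs.OrderByDescending(r => r.Count).FirstOrDefault();
return longest ?? new List<int>();
```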

Additional reading

Finally, I did a little more research (Google) on the challenge, and there are a number of articles and publications that outline the problem, including a free ebook, “Fundamentals of Computer Programming with C#” by Svetlin Nakov and Veselin Kolev.  See exercise 5 on page 257 if you’re interested.

”We are what we repeatedly do. Excellence is not an act, but a habit.”

”We are what we repeatedly do. Excellence is not an act, but a habit.” – Will Durant

As an engineer, I look at the world with a philosophical bent at times.  I find wisdom in the works of men and women who have spent their lives in meaningful contemplation of the human experience.  I gravitated towards this quote because I think it applies to an attribute that many struggle with at times: a positive attitude.

In general, attitude contributes to our success, and ultimately to happiness.  How you act, what you say, and what you do speaks volumes about your character.  For example, imagine you have two co-workers.  One frequently says things are hard, and may not be possible.  The other says things are hard, but then offers suggestions on how they could be easier.  Generally, people will gravitate towards others who are looking for and thinking about solutions.

When a challenge comes my way, I want to find a solution.  Removing obstacles to progress is a habit. If my first reaction is “it can’t be done”, how long will it be before people stop looking to me for solutions?

Solutions may not always come easily, but if you’re looking for solutions instead of excuses, you’re far more likely to be trusted, valued, and respected.

The Challenge of Data Quality

While recently attending a function, I overheard a conversation between a couple of developers, and I was reminded of the long-term implications of design choices for data quality.  The conversation went something like this:

“Hey, we need to increase the size of the data field so that our file output to [a partner] works correctly, right now, data is getting truncated.”

After a pause, his counterpart said: “Yes, that sounds like a good idea, but the reason the field is truncated is that [a different partner] only allows 30 characters in that field, and we decided at the time to only allow that many.”

Thinking about it for a moment, they both slumped back, realizing the challenge they now faced.  Let’s face it, we’ve all made design choices with long-term implications that we “thought” were acceptable, and that’s just life as an engineer.  That said, it highlights the importance of data quality.

No matter how you want to look at data, it’s important that the data is as high quality as you can get, which raises the question: how do you measure data quality?  Without making it a long, drawn-out definition, here are the characteristics I look for, based on what I’ve learned from others.

  • Completeness:  Is the data complete?  For example, do you have an address that’s missing the postal code?
  • Validity:  Is the data collected in the proper format for its intended purpose?
  • Consistency:  Is the data consistent, or do you have different information for the same entity in different locations?
  • Timeliness:  Is the data still current, or is it sufficiently stale that it is no longer accurate?
  • Accuracy:  The culmination of all the other characteristics, in my view.

While not comprehensive, these bullet points are a good starting point.  In reality, data quality is part of a much larger discipline, one that has caused many enterprises to develop data governance teams, whose sole purpose is to improve the quality of data, which makes a lot of sense when you consider the value of data.

All that said, I urge anyone developing a distributed system to collect the best quality data possible, and to perform validation on the incoming data.  Then you can deal with formatting the data on the outbound side.  As the old adage goes: Garbage In, Garbage Out.
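As a minimal illustration of inbound validation covering the completeness and validity characteristics above, here is a sketch in C#. The field names and the US ZIP format are hypothetical examples, not a prescription:

```csharp
using System.Text.RegularExpressions;

static class AddressValidator
{
    // Validate an inbound address record before it enters the system.
    // Field names and the ZIP format are illustrative only.
    public static bool IsValid(string street, string city, string postalCode)
    {
        // Completeness: every required field must be present.
        if (string.IsNullOrWhiteSpace(street) ||
            string.IsNullOrWhiteSpace(city) ||
            string.IsNullOrWhiteSpace(postalCode))
            return false;

        // Validity: the postal code must match the expected format
        // (US ZIP or ZIP+4 here, purely as an example).
        return Regex.IsMatch(postalCode, @"^\d{5}(-\d{4})?$");
    }
}
```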

Using PowerShell to Save Time

I recently got a request to produce a report every day to support a process. The report is only needed for a short time while we add more features. A change request is already underway, but our release cycle is long enough that creating the report remains a daily task.

I wanted a quick automated report that would give me the data I need, so that I wouldn’t be manually executing a query every day of the week.  The solution was a quick PowerShell script, set up as a scheduled task, that exports the data to a CSV file.  I import the SQL from a file in this case because the query is rather ugly:
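The original script isn’t reproduced here, but a sketch of the approach might look like this. The server, database, and file paths are placeholders, and it assumes the SqlServer module’s Invoke-Sqlcmd cmdlet is available:

```powershell
# Sketch of the scheduled-task script. Server, database, and paths
# are placeholders; assumes the SqlServer module (Invoke-Sqlcmd).
Import-Module SqlServer

# Run the ugly query from its own .sql file and pipe rows to CSV,
# stamping the output file with today's date.
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" `
              -Database "ReportingDb" `
              -InputFile "C:\Reports\daily-report.sql" |
    Export-Csv -Path ("C:\Reports\report-{0:yyyy-MM-dd}.csv" -f (Get-Date)) `
               -NoTypeInformation
```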

It’s a quick and simple solution that saves 20-30 minutes a day, and most importantly, it only took a half hour to implement. Since the release is at least two weeks away, I’ve saved myself at least 4½ hours for other things.

The Virtual Media Center: Hardware

In continuing to document my virtual environment, I’ve decided to outline the hardware I’m using, since it was the first question a friend asked. So here’s a quick breakdown of all the components, with some general comments about each.  Bear in mind, most of the components are a couple of years old now, but the machine, despite my harsh treatment, has held up remarkably well.

The components:

The machine was technically a rebuild that started in 2012, and as my primary PC for 3 years, it served me well.

The Asus M5A99FX is a solid motherboard, providing USB 3.0 as well as enough SATA connections to meet my needs.  If I were to build a newer version of an AMD system for this purpose, I would consider something similar, but a little newer of course.

The AMD FX8320 is also a solid performer.  I had it overclocked at 4.2GHz for the majority of the time I was using it as a desktop, and that included a brutal 24/7 regimen of muxing, transcoding, and gaming activity; it was always on so that it could record shows for my family.  FWIW, to keep it nice and cool while beating the heck out of it, I have a Cooler Master Hyper T4 cooler on it, a great upgrade from the stock cooler.

The Ballistix memory is pretty run-of-the-mill, though again, a solid performer; no major concerns there.

The power supply from Cooler Master has been a great investment.  Prior to making the switch to this fully modular PSU, I didn’t really pay much attention to where cables were run in my case, since I never saw them anyway.  However, with the modular cables combined with the CM Force case, I have cables run nicely out of the way, allowing for maximum airflow.  At first this doesn’t seem relevant, but the CPU temperature dropped 5°C, which is pretty significant.

*Technically, I have the V700, which is a little harder to find these days, but they are in many respects the same PSU.

My Adventure in Building a Virtual Media Center

Over the last few months, I’ve been having more and more difficulty with a computer that I built several years ago as a Media Center PC.  It has become unstable for various reasons, including failing drives, software installations and removals, upgrades, etc.  This has left me with a semi-usable machine that ultimately isn’t reliable enough to set a program to record and expect it to be available.  To exacerbate the situation, I have hours and hours of favorite shows and movies that are marked as Copy-Once protected.  While I support the rights of content producers and Digital Rights Management, it can be annoying when you lose entire seasons of your favorite shows.  Copyright issues aside, I want a reliable way to continue to use Windows Media Center, despite the fact that, for the moment, Microsoft appears to be abandoning it.

First things first.  I opted to build my VM lab using VMware 6.0, installing it on the existing hardware.  Originally, I tried using the standard image from VMware; however, like many others using older hardware, I ran into driver problems.  So I did some digging, discovered the ESXi-Customizer-PS script, and opted to follow the instructions available in a video on YouTube.  Long story short, this allows you to build a custom ISO image, adding drivers for older hardware.

Following the creation of my custom ISO image, I needed a way to install it on the hardware, and for this I used an inexpensive USB stick (Lexar 8GB) along with Rufus 2.10 to create my bootable flash drive.  Instructions on using Rufus for this exact purpose can be found on YouTube.

Now that I have VMware up and running my next step is to run through the creation of a fresh new VM.  If all goes well, I may even attempt a Physical to Virtual conversion, but let’s not get too far ahead…

Using SubRip files to Edit Video

Over the years, I’ve fallen victim to early adopter challenges.  I built my first DVR-enabled PC in 2001.  I’ve continued to snub the traditional consumer market, preferring instead to go it alone and have more control.  I have a lot of television recordings collected in numerous formats, a collection so large that the ability to reduce my storage needs is important.  In fact, I started to delete recordings simply to save space.  Early on, because most recordings were in standard definition (SD), I eventually chose to burn videos to DVD to save space.  Over time, storing and organizing the library of DVDs became cumbersome too.

I really didn’t like the fact that my recordings included “wasted” space storing commercials as well as the program.  So I tried some applications that were designed to automatically detect commercials and cut them out.  Early on, the results weren’t very good at all.  So I started working on a process that I could use to cut out commercials and keep only the portions of the video that I wanted.  This is really handy in reducing the size of football games for example…

What I wanted:

  1. A simple method to mark the start and end of clips quickly and easily.
  2. Control over quality and container formats.
  3. Fast performance.
  4. The ability to edit without having to re-encode video.
  5. The ability to retain captions.


Principle of Least Authority

Do you leave the door to your home wide open while you’re on vacation?  Probably not, and your IT systems are no different.  When it comes right down to it, security is a basic necessity, and following some simple principles will help you develop a meaningful security program.  

The principle of least privilege is a founding principle for secure systems.  Simply put, the principle of least privilege means that you must only give people the bare minimum access required to perform their duties.  To make this possible, you have to understand what it is that users need.

While least privilege is an accepted core tenet of information security, it may not be well understood by business users, and is often met with some resistance. Here are some general tips on how to implement it successfully.

#1 Develop a complete perspective..
To start, you must understand what motivates the business. Ask the obvious questions of yourself first.  What is the potential risk to the business if a system is misused?  Define the security principles that are most relevant; personally, I often refer to the CIA Triad: Confidentiality, Integrity, and Availability.

#2 Get the stakeholders on board..
The capabilities of any enterprise system are implemented to support business activities. Our goal is to get the job done with the least amount of risk possible. To understand and mitigate the risk, identify your stakeholders right away; they are best equipped to articulate what their departments need.  If your system is used by HR, Finance, and Operations, then you need to engage the leaders from each area. If necessary, engage your executive team to get buy-in.

#3 Users are assigned roles not privileges..
Allocating access to roles is far easier to manage than assigning access at the user level. It’s also a smart way to prevent access creep, where, over time, a person gains more and more access. Additionally, it’s easier to manage when people move around in the organization.
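A role-based model can be sketched in just a few lines. The role and permission names here are purely illustrative; the point is that privileges attach to roles, and users only carry role assignments:

```csharp
using System.Collections.Generic;
using System.Linq;

static class Rbac
{
    // Privileges are defined once, per role -- never per user.
    static readonly Dictionary<string, HashSet<string>> RolePermissions =
        new Dictionary<string, HashSet<string>>
        {
            ["HR.Clerk"]        = new HashSet<string> { "Employee.Read" },
            ["Finance.Analyst"] = new HashSet<string> { "Ledger.Read", "Report.Run" },
        };

    // A user's effective access is the union of their roles' permissions,
    // so moving someone to a new department is just a role swap.
    public static bool HasPermission(IEnumerable<string> userRoles, string permission) =>
        userRoles.Any(r => RolePermissions.TryGetValue(r, out var perms)
                           && perms.Contains(permission));
}
```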

#4 Create a management process.
Define a procedure that is easy to follow, and train everyone to follow it.  Make changes traceable, and I strongly encourage working with stakeholders to develop an acceptable approval process. This creates accountability and helps reinforce good behavior.

#5 Review early and often.
Following your management process, schedule regular reviews.  Check to make sure that roles and access still match.  I suggest quarterly reviews in high-risk organizations, but at a minimum, reviews should be conducted annually. This helps you stay on track with your stated business requirements.

#6 Make it easy
The most common problem with changes to an environment is that access becomes too hard to get when it’s needed, so people will continually request more access than is necessary, just in case.  Remember to keep it short and simple: a process that is quick and easy is a key factor in success.

Final Thoughts
Remember that a holistic view of security at multiple levels is important. You may need to consider security from the perspectives of clients accessing the network, applications accessing a database, and users accessing the applications. However, a simpler view of each layer independently allows for actionable change. If you attempt to apply these principles to all layers simultaneously, you may find it increasingly difficult to manage the scope.

Perhaps in a future post, I’ll provide a more technical explanation, along with examples that demonstrate the application of least-privilege principles using a real-world scenario.