
Launching Kerno on Hacker News / Product Hunt | Discussing IIoT and Industrial Protocols / HighByte on Manufacturing Hub | Visiting OnLogic in Vermont


Well, let’s get right into it.

As I mentioned in the previous newsletter, we focused much of this week's effort on launching Kerno. The plan was to release on Hacker News on Tuesday and on Product Hunt on Wednesday. There was little we could do to game either platform, but we knew that Product Hunt is extremely competitive on Tuesdays and Thursdays. The Tuesday Hacker News launch was therefore meant as a trial run for the Product Hunt launch on Wednesday.

So what actually happened?

We delayed the launch on Hacker News to Wednesday as we wanted to take the time to validate the post.

We got “flagged” - the equivalent of a ban on Hacker News.

Hacker News - Kerno Getting Flagged

How did we get banned? Well, we all knew there was little we could do to "game" Hacker News. The platform is built to detect manipulation - things like "extra" accounts or brand-new accounts being used to upvote, comment on, or share a post.

Most of our team had fresh accounts. Naturally, they had also shared the post with friends and family in the hope that they'd upvote and support our release. We assumed we'd reach enough organic users that a few of our supporters wouldn't sway the post in either direction.

We were wrong.

The post was banned about 3 hours after being posted.

The Product Hunt Launch

Our Product Hunt launch went much more smoothly. Since we had moved the Hacker News launch to Wednesday, we decided to push our Product Hunt release to Thursday. Here's the top of the launch page!

Kerno Launch on Product Hunt

To secure a spot in the weekly Product Hunt newsletter, a product must finish in one of the top 5 spots. In other words, if you're not popular enough, you miss out on the additional traffic that a newsletter mention brings.

Did we reach a top 5 spot?

No, we didn’t… We came in 13th for the day, and 67th for the week, as shown below:

Kerno Rankings on Product Hunt

Key Takeaways from a Product Launch

So, what’s my take on the product launch? Was it all worth it?

I was hoping my answer would be more confident; the truth is that I'm unsure what we could have achieved, since I had no prior launch experience to use as a benchmark. In this particular case, we didn't secure a top spot, but we did get quite a few interested contacts who scheduled conversations for next week. Our website has a flow that collects an email and sends a brief sequence letting the user book a time to chat with us. Long story short, we have 5 meetings booked this week.

The Technical Opportunity

As an engineer who understands what it takes to build software, I’d say that our launch was a success. What wasn’t seen by the “public” is that the technical team at Kerno was frantically polishing up various features as the launch day approached. From my experience on a variety of projects, engineers tend to leave a lot for the last minute - it’s human nature. That being said, the pressure of the anticipated launch got everyone into gear. Although many complained, a lot was built and delivered for the launch - multiple screens were finalized, bugs were eliminated, and a stable build was released. I’ll spare you all the details as we have a walkthrough of the recent features on the website - Public Beta Release June 28, 2024

Despite the stable build, the crunch exposed a few flaws in our own development cycle. I believe we need a dedicated team member to handle product. We're beginning to have a lot of moving parts to manage and an ever-growing list of features to finalize. Without a clearer path, I don't believe we can deliver everything we're promising while simultaneously building quality code.

Organizational Opportunity

A properly structured launch could greatly benefit both sides of the organization (business and technical) by bringing in leads and allowing the technical team to draw a line in the sand for a specific version release. However, a successful product launch hinges on the team’s ability to communicate clearly and lay out an articulate plan.

It's important to note that I'm a big believer in the role morale plays in all of our endeavors. In this instance, it's critical to recognize that the entire team is extremely stoked about the success of the launch. The slightest hiccup, insignificant to some, can easily plant uncertainty and doubt in the minds of others. What I'm getting at is that a launch can boost morale when it succeeds and diminish it when it doesn't. Managing individuals' expectations and conducting a clear post-mortem regardless of the outcome is key.

This was my post-mortem…

Talking HighByte on Manufacturing Hub

On Wednesday, Dave and I spoke to Aron Semle from HighByte. I've been paying close attention to their solution for a while. They're part of various discussions around industrial DataOps, UNS (Unified Namespace), Industry 4.0, IIoT, and many other topics in industrial data. I've done a lot of work in industrial data and understand the challenges and opportunities for end-users extremely well. That being said, my knowledge of HighByte is limited - when it comes to industrial data and MES / ERP solutions, I've mainly worked with custom solutions, Inductive Automation's Ignition, FactoryTalk View SE, and a few others. I was thus very interested in learning more about the product, the vision Aron has for the future, and his view of what they're solving for end-users.

The Challenges of Industrial Data

There are many differences between what we see in IT and what’s happening in OT. Industrial data presents various challenges for end-users. In this section, I’ll do my best to outline some of those challenges based on my experience. I’ll also propose a few approaches to solving some of those challenges.

Industrial Protocols

Examples of Industrial Protocols - MQTT, ProfiBUS, ProfiNet, DeviceNet, OPC, Modbus, EtherNet/IP

Manufacturing is notorious for having dozens of different protocols. We typically install equipment that is meant to run for decades. It's common to see PLCs, HMIs, and other automation devices installed in the 1980s (sometimes the 1970s) still running on manufacturing floors today. For obvious reasons, these devices didn't have the communication protocols we have now - they couldn't communicate over Ethernet, or any meshed network for that matter. Engineers and OEMs therefore created ways to interface with those devices in an effort to collect data. In practice, a data aggregation exercise entails understanding these protocols, deploying hardware capable of connecting to the devices, and processing the data at the edge before sending it to a "modern" device.

How can end-users solve this problem?

My opinion is that there's no easy solution - manufacturers need to spend the money to either upgrade their equipment or invest in solutions that interface with those devices. Obviously, you'd also need a competent party to navigate the technical design and implementation of these protocols.

Although you might be stuck with "old" protocols, you should follow a strategy that minimizes their presence in your facility going forward. In other words, as you upgrade, perform maintenance, and modernize parts of your process, look for ways to eliminate protocols that aren't easy to work with or have caused you grief (downtime). Keep in mind that the more complexity there is in your system, the more your team will struggle every time there's an issue. By consolidating to a single protocol, you eliminate some of that trouble.
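While you consolidate, a common way to contain protocol sprawl in software is to hide each protocol behind one common read interface, so the rest of your stack never touches registers or node IDs directly. Here's a minimal sketch of that idea; all names are hypothetical, and the "Modbus" driver below is a stand-in with canned data rather than a real client:

```python
from abc import ABC, abstractmethod


class ProtocolDriver(ABC):
    """Common read interface; higher-level code never sees protocol details."""

    @abstractmethod
    def read_tag(self, tag: str) -> float:
        ...


class FakeModbusDriver(ProtocolDriver):
    """Stand-in for a real Modbus client - no device here, data is canned."""

    def __init__(self, register_map: dict[str, int]):
        self.register_map = register_map          # tag name -> register address
        self._registers = {0: 72.5, 1: 1.0}       # pretend device memory

    def read_tag(self, tag: str) -> float:
        # Translate the friendly tag name into a register address and read it.
        return self._registers[self.register_map[tag]]


driver = FakeModbusDriver({"tank_temp": 0, "pump_running": 1})
print(driver.read_tag("tank_temp"))  # prints 72.5
```

Swapping an old serial protocol for Ethernet then means writing one new driver, not rewriting every dashboard and report that consumes the data.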

Non-Standard System Design

The reality of most manufacturing facilities is that their process is tailor-made for their organization. Outside of mass-produced systems, such as bottling lines, every process is unique. Furthermore, if you look "under the hood" of a piece of equipment, chances are the group that designed and commissioned it created its own unique footprint at the facility. I've rarely seen two sites with identical equipment.

Why is this a problem?

Well, if you're looking to extract certain metrics from a machine, there's a fixed amount of engineering effort needed to understand and extract those values. If that engineer can replicate their code on other pieces of equipment, subsequent deployments of the data extraction effort cost far less. However, since there's little to no standardization between machines, the full engineering effort has to be repeated across the organization. In other words, it will take a lot of time to instrument every piece of equipment.

Data Context

Context refers to the underlying information for a specific set of variables.

Picture this - you're looking to analyze the performance of your cooking process and see if there's an opportunity for improvement at every stage of the batch process. In other words, you'd like to understand how long each step takes and how quickly you can start the next batch after the proper cleaning measures have taken place.

You’re going to involve a systems integrator that will assess the situation and propose that they extract the data from your process into a historian, a database, or an interface from which you can understand what you need.

The problem is that the engineering team doesn't always have a full understanding of the process. They're skilled programmers who can access the PLCs, HMIs, and SCADA systems and extract whichever variables they wish, but they don't always know that a specific flowmeter is tied to a valve in a different room which actuates a pump under tank x. The bottom line is that every value brought into a historian needs context. With context and information about the variable, the teams looking at the data can clearly understand what they're seeing.

Unfortunately, context is often lost in current systems integration projects - most teams will integrate only what they need and use their own labeling system to get the project completed.
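To make "context" concrete, here's a minimal sketch of attaching descriptive metadata to a raw historian tag. The tag address, field names, and equipment IDs are all illustrative assumptions, not from any specific product:

```python
from dataclasses import dataclass


@dataclass
class TagContext:
    tag: str          # raw address as the PLC knows it
    description: str  # what the value physically means
    units: str
    equipment: str
    line: str


# Context registry keyed by raw tag address (hypothetical example).
contexts = {
    "N7:12": TagContext(
        tag="N7:12",
        description="Flow through CIP supply valve feeding the pump under Tank 4",
        units="L/min",
        equipment="FM-401",
        line="Cook Line 2",
    ),
}

# A dashboard can now label the value meaningfully instead of showing "N7:12".
print(contexts["N7:12"].description)
```

This is roughly the gap that industrial DataOps tools aim to close: maintaining that mapping as a first-class model rather than as labels scattered across integration projects.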

Forums / Community w/ SolisPLC

At SolisPLC, we use Discourse as the software of our forums - https://forum.solisplc.com/

We self-host the software on a Digital Ocean droplet which costs us about $12 / month to run.

I hadn't updated the software in a while. Naturally, I logged in and was prompted to update the Discourse instance to the latest version. If you've never done this, every major update requires an update via the command line. Sure enough, I recovered my Droplet password and accessed the server via the Digital Ocean console.

I ran the three commands given to me by Discourse and, as I worried might happen, I received a "FAILURE" message. I attempted to re-run the rebuild command and it failed again. I noticed the output suggested running the "Discourse Doctor" via the CLI.
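For anyone curious, the standard command-line upgrade path for a self-hosted Discourse looks roughly like this (paths assume the default Docker-based install; your setup may differ):

```shell
# Standard Discourse rebuild on a default self-hosted install:
cd /var/discourse
git pull                  # update the discourse_docker scripts
./launcher rebuild app    # rebuild the container with the new version

# The built-in diagnostic tool mentioned in the error output:
./discourse-doctor
```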

I ran the command.

The outcome wasn’t much more positive - the tests revealed that there’s an issue and a fatal failure (as before).

Support Forums for Forum Software

The obvious approach to troubleshooting here was to get on the forums of those that built the software. I created a thread explaining the setup we’re running and the issue I had encountered.

To my surprise, I received a fairly prompt response from one of the members. But when I clicked the link in the notification email, I found he was trying to sell me his services - he claimed he'd fix the issue if I agreed to pay him $300 for support.

I’m not against paying for something, but I’m also not about to pay a random person what seemed to be a ransom - I followed the instructions given to me for the update and clearly the process failed. I wasn’t going to pay before I tried other avenues.

Long story short, after beating my head against the wall for an entire afternoon, I decided it was time for the big guns - Digital Ocean provides a backup feature which stores the contents of the droplet on a regular basis. I chose the most recent backup and clicked "restore."

The forums are back in action.

Visit of OnLogic

New OnLogic Facility in Vermont

I've yet to finalize my post on servers. However, as I've immersed myself in my "home lab" and enterprise solutions, I've become very interested in understanding the hardware on the industrial side.

On Friday, I had the opportunity to visit the new OnLogic facility in Vermont. OnLogic, in case you're not familiar, is a company that designs and builds industrial computers. They assemble the hardware, run tests, load software onto the machines, and ship them to customers.

Having worked with various hardware vendors in the past, I believe I can build a good relationship with the company. On the SolisPLC side, we recommend hardware to end-customers who take our training. It's common for us to consult on the devices end-users should buy to host Ignition instances, FactoryTalk View SE instances, etc.

Personal Progress w/ Vlad

I’m back at the gym hitting weights hard. Having taken a long break, the days back haven’t been easy - I’ve been incredibly sore the days after my workouts.

On another note, I’ve been working hard on improving my sleep. Although things are trending in the right direction, the doctor prescribed me some meds on Tuesday. I’m all for medicine, but from what I’ve read, no sleeping pill comes without consequences. After reviewing the long list of potential side effects and listening to a few testimonials on YouTube, I decided to wait.

Monday’s a day off in Canada - I’m looking forward to it.