What Stable and Predictable IT Actually Looks Like https://wylieblanchard.com/what-stable-and-predictable-it-actually-looks-like/ Sat, 02 May 2026 19:40:00 +0000 Most teams are stuck in recurring IT issues that waste time and create risk. Learn what stable, predictable IT looks like and where to start fixing it...

[Image: Reintivity exhibit booth at The Exchange 2026, with messaging about staying online, ending IT fire drills, and achieving uptime for regulated organizations.]

At The Exchange 2026 hosted by the Chicagoland Chamber of Commerce, I heard a version of the same concern again and again.

Leaders were not asking for more apps. They were not asking for a bigger stack. They were not asking for technology for technology’s sake.

They wanted fewer surprises.

They wanted support issues to stop turning into fire drills. They wanted less time lost to manual work. They wanted a better handle on security. And they wanted to understand where AI actually fits without creating more risk or confusion.

That is a healthy instinct.

For most organizations, especially lean teams and regulated teams, the goal is not to keep adding tools. The goal is to make operations more steady, more usable, and easier to trust.

Stable and predictable IT may not sound exciting, but it is what gives your team room to do good work.

Why so many teams still feel stuck

A lot of tech frustration gets blamed on outdated systems or limited budgets. Those are real issues. But they are usually not the whole story.

In many cases, the deeper problem is operational drift.

Over time, teams accumulate one more platform, one more workaround, one more inbox, one more approval step, one more process that nobody fully owns. The stack grows, but clarity does not. Support slows down. Small issues hang around too long. Manual work becomes normal. Security becomes something people talk about separately instead of something built into daily operations.

Then a new priority shows up. Maybe it is AI. Maybe it is automation. Maybe it is growth. Maybe it is compliance pressure.

Now the team is trying to move faster on top of a shaky foundation.

That is when leaders start saying things like:
“Why does this still take so long?”
“Why do we keep seeing the same issue?”
“Why does every improvement feel harder than it should?”

Those are usually not tool questions. They are operating model questions.

What stable and predictable IT actually looks like

When technology is working the way it should, the environment feels calmer.

Not perfect. Not silent. Just calmer.

Here is what that usually looks like in practice.

1. Support is measurable

If support feels random, the business feels random too.

Stable teams know what is coming in, what is repeating, what is aging, and what needs escalation. They can tell the difference between a true exception and a recurring pattern. They are not just closing tickets. They are reducing the reasons tickets happen in the first place.

A good question to ask is:
Do we know which issues are costing us the most time every month?

If the answer is no, start there.
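
If the data exists but nobody has pulled it together, a small script over a ticket export can produce a first answer. Here is a minimal sketch, assuming a CSV export with hypothetical file and column names:

    import csv
    from collections import defaultdict

    # Rank recurring issue categories by total handling time, using a
    # hypothetical export with 'category' and 'minutes_spent' columns.
    time_by_category = defaultdict(float)
    ticket_counts = defaultdict(int)

    with open("tickets_last_month.csv", newline="") as f:
        for row in csv.DictReader(f):
            time_by_category[row["category"]] += float(row["minutes_spent"])
            ticket_counts[row["category"]] += 1

    # The repeat offenders, most expensive first.
    ranked = sorted(time_by_category.items(), key=lambda kv: kv[1], reverse=True)
    for category, minutes in ranked[:10]:
        print(f"{category}: {minutes / 60:.1f} hours across {ticket_counts[category]} tickets")

Even a rough version of that report turns "support feels random" into a ranked list the team can act on.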

2. Workflows are simpler than they used to be

Manual work has a way of hiding in plain sight.

A report gets rebuilt every week. Data gets copied from one system to another. A team member becomes the workaround. People memorize steps that should have been fixed six months ago.

When leaders talk about productivity, this is often the real issue. Not effort. Friction.

Stable IT reduces unnecessary steps. It makes routine work easier to complete, easier to train, and easier to support. It removes dependency on heroics.

A helpful question here is:
What repeat task wastes time every single week, and why are we still tolerating it?
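
The fix is often smaller than the frustration suggests. As one illustration, a weekly report that someone rebuilds by hand is frequently a short script waiting to be written. A minimal sketch, assuming a CSV export with hypothetical file and column names:

    import csv
    from collections import defaultdict
    from datetime import date

    # Rebuild the weekly summary that someone currently assembles by hand.
    totals = defaultdict(float)
    with open("weekly_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            totals[row["department"]] += float(row["amount"])

    with open(f"weekly_summary_{date.today()}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["department", "total"])
        for department, total in sorted(totals.items()):
            writer.writerow([department, f"{total:.2f}"])

The point is not this particular script. The point is that once the task is named, the cost of removing it is usually lower than the cost of tolerating it.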

3. Security is part of the operating rhythm

Security should not live in a separate conversation from operations.

If access is messy, if email risk is unmanaged, if approvals are inconsistent, or if users are unclear on basic expectations, the organization is carrying avoidable risk whether leadership sees it or not.

This matters even more when teams are experimenting with AI tools. You cannot safely move fast with new tools if your access controls, data handling practices, and user habits are loose.

Good security practices are usually not dramatic. They are consistent.

They show up in how access is granted, how changes are approved, how people handle email, how systems are reviewed, and how issues are documented.

A useful question to ask is:
Are our daily habits making the environment safer, or just more familiar?

4. Ownership is visible

One of the fastest ways to create confusion is to let a system, workflow, or recurring issue belong to everyone and no one.

Stable environments have clear owners.

Someone owns the tool.
Someone owns the workflow.
Someone owns the data.
Someone owns the next step when something breaks.

That does not mean one person does all the work. It means accountability is visible.

When ownership is unclear, problems sit. Work slows down. Frustration grows. People fill the gaps informally, which creates even more confusion later.

Ask this:
Who owns this process after launch, not just during setup?

That answer matters more than most teams realize.

5. Change does not break the business

A healthy environment can absorb change.

It can handle a new process, a new vendor, a new automation, or a new AI use case without throwing the whole team into reactive mode.

That is what leaders should want.

Not constant change for its own sake. Controlled change that the business can actually support.

Before adding another platform or pushing a broad AI initiative, ask whether the current environment can carry it. If the team is already buried in ticket churn, manual work, and unclear ownership, adding more tools will usually add more noise.

The basics still matter because the basics determine whether change becomes progress or just more disruption.

[Image: Reintivity team members at Booth 52 during The Exchange 2026, holding copies of "Zero-Downtime Care" and talking with attendees about reducing IT fire drills and creating more predictable operations.]

Five questions to ask before you buy another tool

Before you add one more platform to the stack, take a step back and ask:

  1. What specific recurring issue are we trying to fix?
  2. Is this really a tool problem, or is it a workflow or ownership problem?
  3. What manual task is costing us the most time each week?
  4. What risk gets harder to manage if we add another system here?
  5. Who will own adoption, support, and cleanup after go-live?

These questions can save a team a lot of money and a lot of frustration.

A practical example

Sometimes a team says they need AI.

What they actually need first is to reduce ticket churn, tighten email and access practices, clean up one or two broken workflows, and make sure ownership is clear.

Once that foundation is in place, AI becomes easier to evaluate and safer to use. The conversation gets more practical. The risk gets easier to manage. The results are usually better.

The same is true for automation, reporting tools, and most other tech investments.

Better decisions start with a clearer operating baseline.


The real goal

The goal is not more complexity.

The goal is fewer surprises.

That means less friction, clearer ownership, steadier support, and security habits that hold up under pressure. It means building an environment your team can rely on, not just one they have learned to work around.

In healthcare, education, nonprofit, insurance, government, and other regulated settings, this matters even more. Downtime, weak controls, and recurring support issues do not stay contained. They ripple out into service, trust, and execution.

Stable and predictable IT is not flashy.

It is what lets people do their jobs with confidence.

If your team is dealing with the same repeat issue over and over, start there. You may not need another tool. You may need a clearer plan.

If you want a simple place to start, take inventory of one recurring issue, one manual workflow, and one security habit your team should no longer be working around. That exercise alone will tell you a lot.

[Image: Reintivity team members at Booth 52 during The Exchange 2026 at Soldier Field, speaking with attendees about reducing IT fire drills, improving security, and streamlining workflows.]

Why Good Work Gets Overlooked, and How to Make Your Impact Easier to See https://wylieblanchard.com/why-good-work-gets-overlooked-and-how-to-make-your-impact-easier-to-see/ Thu, 30 Apr 2026 09:00:00 +0000 Good work gets missed when the impact is hard to see. The shift happens when you stop listing effort and start showing outcomes leadership can use...

A lot of capable professionals do meaningful work every week and still struggle to get the recognition, support, or advancement they expected.

Usually, the issue is not effort. It is visibility.

I was reminded of that during a recent Walden University Alumni interview. The conversation touched on a common problem in both careers and leadership: important work often gets described too vaguely, documented too late, or handed off without clear ownership.

When that happens, the value is harder to see. Good work starts to look like routine activity. Wins get forgotten. Leaders miss the business impact. And when decisions about promotions, budgets, or support need to be made, the proof is not easy to find.

That is a problem for individual contributors. It is also a problem for managers, executives, and business owners.

Good work gets overlooked when the impact is invisible.

The first mistake is describing work like a task instead of a result.

A lot of professionals say things like:

“I led the project.”
“I managed the implementation.”
“I supported the rollout.”

Those statements may be true, but they do not tell leadership what changed.

Leadership is usually listening for a few simple things:

  • What changed?
  • Why did it matter?
  • What outcome improved?

That is why outcome language lands differently.

Instead of:
“I managed the implementation.”

Try:
“We completed the implementation on schedule, reduced follow-up issues, and gave leadership a clearer view of risk.”

Instead of:
“I led the project.”

Try:
“We cut response time by 28% and reduced escalation risk.”

The second version gives people something they can understand and remember. It makes your contribution easier to use in a staffing conversation, a performance review, an interview, or a budget discussion.

This is not about making ordinary work sound dramatic. It is about describing the real value clearly.

Why strong work still gets forgotten

Even when people know they should speak in outcomes, many still run into the same problem:

They did not capture the proof while the work was happening.

That has real consequences.

Promotions get missed because examples are vague.
Interviews feel weaker than they should because the best wins are hard to recall.
Managers try to advocate for someone with only part of the story.
Teams complete meaningful work, but months later no one can point to the evidence.

In a lot of cases, professionals do not have a performance problem. They have a documentation problem.

One habit helps more than most people realize: keep a career receipts file.

This does not need to be polished. It does not need to look like a resume. It just needs to be a simple place where you capture evidence as it happens.

What to capture in your receipts file

Keep it simple. When something important happens, write down:

  1. What changed
  2. What outcome improved
  3. What risk, cost, or delay was reduced
  4. What part you owned
  5. Any metric, deadline, or result that helps prove it

That may look like this:

Weak version:
“I supported the rollout.”

Stronger version:
“I helped complete the rollout on schedule, reduced follow-up issues, and gave leadership a clearer view of risk.”

Weak version:
“I worked on reporting improvements.”

Stronger version:
“I improved reporting turnaround, reduced manual rework, and gave leaders faster access to decision-ready information.”

You are not trying to write your annual review in real time. You are building a record that makes future conversations easier and more accurate.

That file can help with:

  • performance reviews
  • promotion discussions
  • job interviews
  • resume updates
  • team recognition
  • manager advocacy

Most people undersell themselves because they rely on memory. Memory is inconsistent. Evidence is much more useful.

Where leaders make this worse without realizing it

This issue does not sit only with employees.

Leaders often create the same problem when they fail to define what success looks like, what proof matters, and who owns the result.

That matters even more when outside support is involved.

You can hand off execution.
You cannot hand off accountability.

A consultant, vendor, agency, MSP, or implementation partner may own delivery tasks. They do not own your internal trade-offs, your business risk, or your final decisions.

That breakdown usually starts in a few predictable places:

  • Success criteria
  • Decision rights
  • Exception handling
  • Final sign-off

Once those areas get fuzzy, confusion turns into risk. The work may still get done, but the ownership story gets weaker. Teams start assuming someone else is tracking outcomes. Vendors assume the client will make the final call. Internal leaders assume the partner is carrying more accountability than they really are.

That is when good execution can still produce a disappointing result.

The strongest teams keep ownership visible, even when work is shared.

What good looks like in practice

Whether you are trying to grow your career or lead a team, the pattern is similar.

Good work becomes easier to support when you do four things consistently:

  1. Track outcomes, not just effort
    Do not stop at what was done. Capture what changed because it was done.
  2. Translate work into business language
    Speed, risk, cost, compliance, customer experience, staff efficiency, and revenue impact are easier for leadership to use than activity summaries.
  3. Save the proof while it is happening
    Do not wait until the annual review, the interview, or the board update to reconstruct the story.
  4. Keep accountability visible
    When work is shared, be clear about who defines success, who approves trade-offs, and who owns the final result.

These habits help at every level.

  • For professionals, they make your value clearer.
  • For managers, they make advocacy easier.
  • For executives, they improve decision quality.
  • For organizations, they reduce the gap between effort and recognition.

A simple question to ask yourself

Before your next review, interview, project update, or leadership meeting, ask:

If someone had to explain the value of my work in two sentences, would they have the proof to do it well?

That question gets to the heart of the issue.

Good work should not disappear because it was described like maintenance.
Good work should not be undervalued because nobody captured the outcome.
And good leadership should not assume accountability moved just because execution did.

When value is clear, support gets easier.
When proof is available, advocacy gets stronger.
When ownership stays visible, results hold up better.

That is true for careers. It is true for teams. And it is true for businesses trying to scale without losing clarity.

For more practical ideas on leadership, technology, and business execution, join my newsletter or explore more articles here on the site.

Why Low-Code Projects Get Expensive When Expertise Shows Up Late https://wylieblanchard.com/why-low-code-projects-get-expensive-when-expertise-shows-up-late/ Sun, 19 Apr 2026 23:48:44 +0000 Low-code can speed delivery, but when governance, integration, and ownership show up late, the real cost starts after launch. The expensive part is...


Low-code can help teams move faster.

But speed at the beginning does not guarantee lower cost at the end.

A lot of leaders hear the same promise:
Build faster.
Launch sooner.
Clear the backlog.
Give the business what it asked for.

Then, after launch, the real invoice shows up.

I hear some version of this often:

“We built it in low-code.
It’s 90% there.
Can you help us finish the last 10%?”

Usually, the answer is no.

Not because the platform is bad.
Because the last 10% is often where the hard parts live.

That is where teams run into integration gaps, unclear ownership, weak access controls, support issues, reporting needs, and compliance questions that should have been addressed much earlier.

The app looked simple in week one.
Production made it expensive.

Why the last 10% costs so much

Most low-code projects start with a reasonable goal:
move faster and reduce manual work.

That part makes sense.

The problem is that many teams treat the early build like the whole project.
It is not.

The hard part is usually not getting a screen to work.
The hard part is making the workflow hold up in the real world.

That means asking questions like:

  1. Who owns the process after go-live?
  2. How does this connect to the rest of the environment?
  3. What happens when volume grows?
  4. Who approves access and monitors changes?
  5. What does support look like when the original builder moves on?

If those questions show up late, cost shows up late too.

Where cleanup usually starts

In most cases, cleanup begins in one of five places.

  1. Process
    The workflow gets built before the process is fully defined.
    That leads to rework, exceptions, and confusion after launch.
  2. Integrations
    Teams treat integrations like a follow-up task.
    Then they find out the app depends on data, systems, or handoffs that were never fully mapped.
  3. User adoption
    The people who actually use the workflow were not involved early enough.
    Now the tool works technically, but not operationally.
  4. Governance
    Access, data handling, audit needs, and oversight are added after the build is already moving.
    That gets expensive fast, especially in regulated environments.
  5. Ownership
    Nobody has a clear answer for who maintains the app, updates rules, handles support, or decides what changes next.

Low-code reduces build time.
It does not remove the need for sound decisions.

What leaders should ask before approving the build

Before a low-code project moves forward, I would want clear answers to these questions:

  • What business problem are we solving?
  • Which teams, systems, and data sources are involved?
  • Who will use it, approve it, support it, and own it?
  • What compliance, audit, or security requirements apply?
  • What has to be true for this to still work six months after launch?

Those questions slow down bad assumptions.
They also protect the budget.

A better way to think about speed

Speed is useful.
But speed without clarity usually turns into cleanup.

The most expensive app is often the one that looked easy in the first meeting.

That matters even more in healthcare, finance, education, and other regulated settings, where weak process design and late governance decisions create more than inconvenience. They create operational risk.

If a low-code project is already underway, the goal is not to panic.
The goal is to step back early enough to define the process, confirm ownership, review integrations, and address controls before the cleanup grows.

Low-code can be a smart move.

Just do not wait until the last 10% to bring in the thinking that should have shaped the first 90%.

[Cartoon: a three-panel strip in which a team builds a low-code app on its own, skips requirements, architecture, UX, integrations, security review, and governance, and watches the result collapse at go-live.]

Why AI Security Tools Fail in the First 30 Minutes of an Incident https://wylieblanchard.com/why-ai-security-tools-fail-in-the-first-30-minutes-of-an-incident/ Tue, 24 Mar 2026 08:24:00 +0000 In a breach, teams rarely fail from lack of alerts. They fail when the first 30 minutes turn into debate instead of action. Here's what better looks like...

[Image: bus shelter poster titled "The First 30 Minutes of a Breach": Don't debate. Decide. Unify signals. Prioritize actions. Automate safely.]

When a security incident starts, most teams do not lose time because they saw nothing.

They lose time because too many people are looking at too many signals and reaching for different next steps.

That first stretch matters more than most dashboards admit. It shapes containment, communication, escalation, and confidence. If the team spends those minutes debating instead of acting, the problem gets larger before the response gets clearer.

This is where a lot of AI security conversations go off track.

Leaders often ask whether the model is accurate, how many alerts it can process, or how much analyst time it can save. Those are fair questions. But during a live incident, one question matters more:

Can the system help the team choose the first right action?

If the answer is no, the rest of the promise does not matter much in the moment.

The real breakdown is not always detection

Security teams usually have data.

They may have endpoint alerts, identity signals, email warnings, firewall logs, cloud events, and user reports. The problem is not always visibility. The problem is that the team has not turned those inputs into a shared operating picture.

That gap creates a familiar pattern:

  • One person wants to isolate the device.
  • One person wants to wait for more evidence.
  • One person is checking whether the alert is duplicated elsewhere.
  • One person is trying to explain the issue to leadership before the facts are stable.

Now the first 30 minutes become a meeting instead of a response.

Attackers benefit from that confusion. Not because they were invisible, but because the team was stuck sorting signal from noise.

What useful AI should do in an incident

AI in security should not add another layer of output for analysts to interpret.

It should reduce ambiguity.

In practical terms, that means three things.

1. Pull the signals into one usable incident view

A responder should not need to jump across four tools to understand whether the same user, host, or account is involved in multiple alerts.

A useful AI layer should connect the evidence, summarize what belongs together, and show the timeline in plain language. It should help the team answer basic questions fast:

  • What happened first?
  • What systems or identities are involved?
  • What changed?
  • What looks confirmed versus assumed?

The goal is not a prettier dashboard. The goal is a shared view that helps the team move.
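
The correlation step itself is not exotic. Here is a minimal sketch of the idea, using made-up alert records in place of real endpoint, identity, and email tool output:

    from datetime import datetime

    # Fold alerts from separate tools into one view, keyed by the user
    # or host involved. These records are hypothetical stand-ins.
    alerts = [
        {"time": "2026-03-24T08:02:00", "source": "email", "entity": "j.doe",
         "detail": "credential phishing reported"},
        {"time": "2026-03-24T08:11:00", "source": "identity", "entity": "j.doe",
         "detail": "impossible-travel sign-in"},
        {"time": "2026-03-24T08:14:00", "source": "endpoint", "entity": "LAPTOP-42",
         "detail": "suspicious process spawn"},
    ]

    incident_view = {}
    for alert in sorted(alerts, key=lambda a: datetime.fromisoformat(a["time"])):
        entry = f'{alert["time"]} [{alert["source"]}] {alert["detail"]}'
        incident_view.setdefault(alert["entity"], []).append(entry)

    # One plain-language timeline per user or host, instead of four consoles.
    for entity, timeline in incident_view.items():
        print(entity)
        for line in timeline:
            print("  " + line)

The production version adds identity resolution, confidence scoring, and evidence links, but the output should stay this readable.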

2. Rank the next actions, not just the alerts

Many teams are buried in medium-priority noise. That is a triage problem, not just a staffing problem.

The best support AI can provide is not another long list. It is a short list of recommended next steps with a clear reason behind each one.

For example:

  1. Disable the compromised session token.
  2. Isolate the endpoint tied to lateral movement.
  3. Preserve logs and notify the incident lead.

That kind of prioritization helps analysts act with discipline. It also helps managers explain the response path to executives without creating more confusion.

3. Automate the low-risk moves and gate the high-risk ones

Automation has value, but only when the team trusts the guardrails.

Low-risk steps can often be automated with confidence, such as enriching an alert, opening a case, gathering artifacts, or quarantining a clearly malicious email. Higher-risk actions, such as disabling a production identity, cutting access to a critical system, or blocking business traffic, need human approval.

The line should be clear before an incident starts.

A strong setup usually looks like this:

  • Low-risk actions can run immediately
  • Higher-risk actions require named approval
  • Every step is logged
  • Reversal steps are defined in advance

That is how teams move faster without creating a second incident during the first one.
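
That policy can be written down as something as plain as a dispatch function with two lists and a log. A sketch, with hypothetical action names:

    import logging

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

    # The split between these two sets is the decision to make before
    # an incident, not during one. Action names are hypothetical.
    LOW_RISK = {"enrich_alert", "open_case", "collect_artifacts", "quarantine_email"}
    HIGH_RISK = {"disable_identity", "isolate_server", "block_business_traffic"}

    def dispatch(action, target, approved_by=None):
        """Run low-risk actions immediately; gate high-risk ones on a named approver."""
        if action in LOW_RISK:
            logging.info("AUTO %s on %s", action, target)
            return True
        if action in HIGH_RISK:
            if approved_by is None:
                logging.info("HELD %s on %s: awaiting named approval", action, target)
                return False
            logging.info("APPROVED %s on %s by %s", action, target, approved_by)
            return True
        raise ValueError(f"unknown action: {action}")

    dispatch("quarantine_email", "msg-1842")                        # runs immediately
    dispatch("disable_identity", "j.doe")                           # held for approval
    dispatch("disable_identity", "j.doe", approved_by="incident lead")

Whatever tooling sits around it, the requirements stay the same: the line between the two sets is explicit, every step is logged, and reversal is defined in advance.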

The governance questions leaders should ask before rollout

Before approving AI for security operations, leaders should pressure-test the operating model, not just the feature list.

Start with these questions:

  1. What actions can the system take on its own?
  2. What data sources can it access and summarize?
  3. Which actions require human approval, and from whom?
  4. What is recorded for audit and after-action review?
  5. How do we reverse a bad action quickly?
  6. Who owns the workflow when the recommendation is wrong or incomplete?
  7. What happens when the system has low confidence?

These questions matter because incident response is not just a technical process. It is also an accountability process.

Where teams usually lose the most time

In my experience, delay usually shows up in one of three places.

Detection

The signal exists, but it is not trusted or seen quickly enough.

Triage

The team sees the issue, but cannot agree on urgency, scope, or ownership.

Proof

The team takes action, but struggles to confirm what actually happened, what was touched, and whether the issue is contained.

For many organizations, triage is the hidden bottleneck. Detection tools improve every year, but clear decision-making still lags behind.

That is why the first-action test is so useful. It cuts through marketing language and forces a practical question: when the pressure rises, does this help us decide, or does it give us one more thing to interpret?

Why this matters even more in regulated environments

In healthcare, finance, education, and other regulated settings, the first decision is rarely just about speed.

It is also about business continuity, data exposure, auditability, and downstream communication.

That changes the standard.

A response team does not just need fast recommendations. It needs recommendations that fit policy, preserve evidence, respect access boundaries, and support later review. If the AI layer cannot help within those constraints, it is not ready for a serious role in live response.


A security incident does not become dangerous only because someone missed an alert.

It becomes dangerous when the team cannot turn early signals into a clear first move.

That is the standard I would use for any AI security workflow. Before asking how advanced it is, ask whether it helps your team act with clarity in the first 30 minutes.

That answer will tell you more than any product demo.

If your team is reviewing AI for incident response, start by mapping where time is lost today: detection, triage, or proving what happened. That exercise usually reveals the real design problem.

Boards don’t fund modernization – They fund proof https://wylieblanchard.com/boards-dont-fund-modernization-they-fund-proof/ Sun, 15 Mar 2026 15:08:00 +0000 Modernization gets funded when proof is clear: outcomes, guardrails, pilot thresholds, and one decision owner. In regulated settings, one page can...

Bring one page: proof, timeline, guardrails.

You’ve been in this meeting.

“We like the idea, but define success.”
“What’s the minimum evidence?”
“What keeps us safe if it fails?”

In regulated settings (healthcare, finance, public sector), that’s how decisions get made.

Use this.

ONE-PAGE WIN DEFINITION

1) Outcome (what moves, by when)

  • Example: Reduce claim denials by X% in 90 days.

2) Adoption proof (who, what workflow, what target)

  • Example: Nurses complete discharge in under Y seconds.

3) Guardrails (what must stay true)

  • Example: Audit evidence auto-generated for A, B, C controls, with a defined rollback path.

4) Proof threshold (minimum pilot that counts)

  • This is the bar: 14 days, 3 workflows, 30 real users.
  • Pre and post time-on-task and error rate.
  • Audit artifacts produced as part of the workflow, not after the fact.

5) Timeline and decision owner (one clear decision point)

  • Pilot start date, readout date, go/no-go decision owner.

Align in this order:

  • Finance confirms funding gates and release criteria.
  • Ops confirms the workflow is real.
  • IT confirms integration risk and control evidence.

[Infographic: "Boards fund proof," showing a printable One-Page Win Definition form with five fill-in boxes (Outcome, Adoption proof, Guardrails, Proof threshold, Timeline and decision owner), plus the Finance → Ops → IT alignment order and board questions.]

This content was originally posted on LinkedIn.

How to Run a 60-Minute Ransomware Tabletop Before a Real Incident Hits https://wylieblanchard.com/how-to-run-a-60-minute-ransomware-tabletop-before-a-real-incident-hits/ Thu, 12 Mar 2026 17:17:00 +0000 A written incident plan is not enough. Here’s a 60-minute ransomware tabletop you can run tomorrow to test roles, decisions, and response gaps before...

[Image: blue and white graphic with a clipboard icon: "A plan doesn’t save you. Practice does. Run a 60-minute tabletop."]

Most organizations can point to an incident response plan.

Fewer can tell you, without hesitation, who is in charge, what gets isolated first, who approves emergency spending, and who owns the first message to staff when systems go down.

That gap matters.

In a ransomware event, the first hour is rarely about having perfect information. It is about clear ownership, fast decisions, and calm coordination across IT, operations, legal, communications, and compliance.

If you lead uptime, security, or operational risk in healthcare or an SMB, a short tabletop exercise can expose weak spots before an attacker does. The agenda below is simple enough to run tomorrow and useful enough to improve how your team responds under pressure.

The real test is not the document, it is the response

A written plan has value. But a plan that nobody has practiced often breaks down in the first few minutes of a real incident.

People hesitate.
Decision rights get fuzzy.
Too many people try to lead.
Not enough people know who can approve what.
Critical calls get delayed because nobody is sure who owns them.

That is why tabletop exercises matter. They turn policy into action. They show you whether your team can make decisions with time pressure, uncertainty, and real operational tradeoffs.

A simple ransomware scenario to run with your team

Use this prompt to start the discussion:

It is 7:00 AM. Staff cannot log in. IT confirms ransomware on three servers. What happens next?

This scenario works because it gets to the point quickly. No long setup. No complicated backstory. Just a realistic trigger that forces the team to make decisions.

A 60-minute tabletop agenda you can run tomorrow

0 to 10 minutes: Name the Incident Commander, confirm scope, set decision authority

Start with the basics.

Who is leading the response?
What do you know so far?
What decisions can be made immediately, and who can approve them?

If your team cannot identify the Incident Commander within seconds, that is a signal. You may have a written response plan, but not a usable one.

10 to 25 minutes: Decide what stays up, what gets isolated, and how to stop the spread

This is where operational tradeoffs show up fast.

Which systems are critical enough to protect at all costs?
Which systems need to be isolated now?
Who has authority to shut down access, disconnect devices, or pause workflows?

The goal here is not technical perfection. The goal is to contain the issue without making the disruption worse.

25 to 40 minutes: Call the outside partners and approve emergency spend

Many organizations lose time because they know they need outside help, but have not worked through the order of operations.

This is the moment to confirm:

  • Who contacts cyber insurance
  • Who contacts outside counsel
  • Who engages forensics
  • Who can approve emergency spending
  • Whether current contact information is easy to access

If those details live in one person’s inbox or memory, the exercise is doing its job by exposing that risk.

40 to 55 minutes: Assign one spokesperson and draft the first messages

Incidents create confusion fast, especially when employees, customers, patients, partners, or regulators may be affected.

Choose one spokesperson.
Draft the first internal message.
Set the external holding statement.
Clarify what would trigger notification requirements.

This part matters because silence creates its own problems. Teams need to know what to say, what not to say, and who owns the message.

55 to 60 minutes: Debrief and assign the top five fixes

Do not end the session when the clock runs out.

End it by capturing the top five issues the exercise exposed, assigning owners, and setting due dates.

Without that step, the tabletop becomes a calendar event instead of an operational improvement.

Keep the roles simple

You do not need a long cast of characters to make this exercise useful. Start with the core group:

  • Incident Commander: Owns the response and decision flow
  • IT Lead: Confirms technical scope and containment options
  • Legal Counsel: Advises on privilege, notification, and exposure
  • Cyber Insurance Contact: Helps activate the policy and required steps
  • Communications Lead: Owns internal and external messaging
  • Privacy or Compliance Lead: Assesses reporting thresholds and regulatory obligations
  • Operations or Clinical Lead: Brings the business or care-delivery impact into the room

In healthcare, that last role is especially important. Technical containment decisions can affect patient flow, scheduling, documentation, and other frontline operations. The response cannot live inside IT alone.

What good leaders should ask after the exercise

A short debrief can surface more value than the scenario itself. Ask questions like:

  • Could everyone identify the Incident Commander right away?
  • Were decision rights clear, or did people talk around ownership?
  • Did the team know which systems were truly mission-critical?
  • Were outside contacts, including insurance and counsel, current and accessible?
  • Did anyone discover a hidden dependency that would slow containment?
  • Were communications and notification triggers clear?
  • What five fixes would reduce confusion the fastest?

These are leadership questions as much as technical ones.

Why this matters even more in healthcare and other regulated environments

In healthcare, ransomware is not just a security issue. It can affect access to systems, staff coordination, patient communications, privacy obligations, and continuity of care.

The same is true in other regulated settings such as finance and education. When downtime intersects with sensitive data, reporting thresholds, and operational disruption, vague plans become expensive very quickly.

That is why a tabletop should test more than the technical response. It should test governance, escalation paths, communication discipline, and ownership under pressure.


The goal of a tabletop is not to prove your team is perfect.

The goal is to find confusion before a real incident does.

One focused hour each year can turn a static plan into something your team can actually run under pressure.

If you own uptime or security in healthcare or SMB environments, make this a recurring exercise, not a one-time discussion. Repetition is what builds confidence, speed, and better decisions when the stakes are real.

If this topic is part of your role, join my newsletter for one practical playbook each week on security, continuity, and IT leadership.

Why Disabling Email Is Not Enough During Offboarding https://wylieblanchard.com/why-disabling-email-is-not-enough-during-offboarding/ Sun, 08 Mar 2026 08:58:00 +0000 Disabling email does not always remove access. Offboarding gaps often leave data, apps, and approvals exposed long after an employee exits, which means...

[Image: graphic reading "Email Off, Access On," showing a disabled email icon next to still-open links to files, apps, and data.]

Many organizations treat offboarding like an account shutdown exercise. HR processes the exit. IT disables the email account. The identity record is turned off, and the team moves on.

That sounds complete, but it often is not.

In healthcare, education, and nonprofit environments, the bigger risk usually sits beyond the main account. Access can remain in shared drives, cloud apps, finance tools, vendor portals, and local systems that were never tied back to a central process in the first place.

That is where offboarding breaks down.

Offboarding has three separate control points

A clean exit process should cover three things:

Identity
Who the person is in the system.

Access
What systems and permissions they still have.

Data
What records, files, messages, or histories they can still reach.

Many teams handle the first one well. Fewer handle the second and third with the same discipline.

That gap matters because disabling identity does not always remove downstream access. A person can lose their primary login and still have active permissions in other places. In some cases, those paths remain open for weeks or months.

Where the gap shows up first

This problem tends to surface in the same types of systems:

  • Shared drives that contain patient, student, donor, or staff records
  • Financial platforms where approval rights were never fully removed
  • Vendor portals tied to an old inbox or a personal credential
  • Cloud applications authenticated outside the company’s single sign-on process
  • Collaboration platforms that still hold sensitive conversations and files
  • Password managers or shared service accounts
  • Local accounts created outside the HR and IT workflow

These are not edge cases. They are predictable misses.

The common thread is simple: anything outside your standard identity process is easier to overlook.

Why this keeps happening

Most offboarding gaps are not the result of bad intent. They are the result of fragmented ownership.

HR may own the separation workflow. IT may own the directory account. Security may review logs. Department leaders may know which tools the person actually used. Finance may control a separate approval platform. Operations may rely on local accounts no one formally tracks.

When nobody owns the full picture, controls become partial by default.

That is why organizations often think they have an offboarding process when what they really have is a series of disconnected actions.

A simple 90-day audit can tell you the truth

If you want a fast reality check, start with your last 90 days of terminations.

Use a simple review process:

  1. Pull the list of employees or contractors who exited in the last 90 days.
  2. Identify your 10 most critical systems.
  3. Pull last-login or activity reports for those former users.
  4. Compare any activity dates to the user’s exit date.

If a former employee still shows activity after separation, you likely have a control gap.

There is another signal to watch for: if a system cannot produce a reliable last-login report, that is a risk in itself. You cannot verify removal if you cannot verify access.
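
Steps 3 and 4 can be a short script rather than a project. A minimal sketch, assuming two CSV exports with hypothetical file and column names (one from HR, one from a critical system):

    import csv
    from datetime import datetime

    def load(path, key_col, date_col):
        # Map each user in a CSV export to a parsed date.
        with open(path, newline="") as f:
            return {row[key_col]: datetime.fromisoformat(row[date_col])
                    for row in csv.DictReader(f)}

    exits = load("terminations_90d.csv", "user", "exit_date")
    logins = load("finance_app_last_login.csv", "user", "last_login")

    for user, exit_date in exits.items():
        if user not in logins:
            print(f"{user}: no last-login record; confirm access was removed or never granted")
        elif logins[user] > exit_date:
            print(f"{user}: active {logins[user]:%Y-%m-%d} after exit {exit_date:%Y-%m-%d}")

Run that against each of your ten most critical systems, and the control gaps, including the systems that cannot produce the report at all, become visible quickly.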

What stronger offboarding looks like

A better process does not need to be complicated. It does need clear ownership.

A practical model looks like this:

1. Identity: one stop point

Use a central identity process, ideally through single sign-on, as the trigger for offboarding. The goal is one reliable action that starts the shutdown sequence.

2. Access: role-based removal

Different jobs create different access footprints. A nurse, controller, case manager, registrar, and operations lead should not all use the same offboarding checklist. Build role-based checklists for the systems and privileges tied to each function.

3. Data: named owner confirmation

Every critical application should have a named owner. That owner should confirm access removal, transfer of files, and disposition of shared records within a defined window, such as 24 hours.

This shifts offboarding from assumption to accountability.

Why regulated organizations should care more

In regulated environments, offboarding is not just an IT housekeeping issue.

Healthcare organizations manage protected health information. Education organizations manage student records. Nonprofits often handle donor, program, financial, and beneficiary data across a wide mix of systems. When access does not match current employment or current role, the issue quickly moves beyond operations and into audit, privacy, and governance territory.

The risk is not only that a former employee can still get in.

The larger concern is that excess access often exists across the board. If former staff still have permissions, current staff may also have access they no longer need. That points to a broader access governance problem, not a one-off offboarding miss.

Questions leaders should ask now

If you want a stronger handle on this issue, start with five questions:

  • Which systems are included in our offboarding process today, and which are outside it?
  • Can we see last-login activity for every critical application?
  • Do we have role-based offboarding checklists, or just a generic termination ticket?
  • Does every critical system have a named business owner?
  • How quickly do we confirm access removal after an exit?

These questions can reveal weaknesses fast.


Shutting off email is not the same thing as shutting off access.

A complete offboarding process covers identity, permissions, and data exposure. If even one of those areas is left open, the organization is carrying unnecessary risk.

Start with a 90-day review. Check terminated users against your most important systems. Look for post-exit activity. Then assign ownership where the process is still vague.

That one review can tell you whether your offboarding process is really closing the door, or just turning off the lights.

Most Board Cyber Briefings Are Built for Audits, Not Outages https://wylieblanchard.com/most-board-cyber-briefings-are-built-for-audits-not-outages/ Wed, 18 Feb 2026 14:16:00 +0000 Passing an audit does not mean you can operate through an outage. Here are five boardroom questions that reveal real cyber risk before...

[Image: cover graphic titled "5 Cyber Questions Boards Should Ask," with the subtitle "Beyond compliance checkboxes."]

Many board cyber briefings are built to prove compliance.

They show that policies exist, training happened, and audits were cleared. Those things matter. They help establish accountability and reduce obvious gaps.

But they do not answer the question that matters most when systems are down and people are waiting:

Can the organization keep operating under pressure?

That is where real risk sits.

In regulated environments like healthcare and education, boards often receive updates that are technically correct but operationally incomplete. A clean audit may confirm that required controls are in place. It does not confirm that the organization can restore services quickly, make good decisions under stress, or continue serving people during a disruption.

Good governance requires more than evidence of compliance. It requires visibility into resilience.

Compliance Is Necessary, but It Is Not the Same as Readiness

Compliance helps organizations meet a standard. Readiness helps them keep functioning when something goes wrong.

That distinction matters.

An organization may have backups, documented policies, annual training, and favorable audit results. But when an outage hits, leadership still needs answers to practical questions:

  • How long will recovery take?
  • Who is making decisions?
  • What dependencies could slow response?
  • What happens if a key person is unavailable?
  • What will the disruption cost in operations, reputation, and recovery?

Those are not abstract questions. They shape whether an organization can continue delivering care, instruction, services, or support when systems fail.

Five Better Questions for the Boardroom

Here are five questions that surface operational risk faster than a standard compliance update.

1. If our systems went down tomorrow, how long until we are back up, and when did we last test that?

Compliance often asks whether backups exist.

A stronger board question asks whether recovery actually works.

Backups are only part of the story. The real issue is whether systems can be restored within a time frame the organization can tolerate. That means knowing recovery targets, validating dependencies, and testing restoration under realistic conditions.

If the answer is unclear, outdated, or based on assumptions rather than exercises, the organization may be carrying more risk than leadership realizes.

2. How long does it take us to patch critical issues, and who owns the delays?

Policies can say critical vulnerabilities must be addressed quickly.

That is not the same as knowing how long patching actually takes.

Boards should understand cycle time, exception handling, and where delays tend to happen. Is the issue staffing? Change approvals? Legacy systems? Vendor dependency? Competing priorities?

A measured process gives leadership something real to manage. A written policy without execution data leaves too much hidden.

3. Who can access our most sensitive data today, and when did we last review that list?

Access problems are often quiet until they are not.

Over time, permissions accumulate. Contractors stay active longer than expected. Former roles keep access they no longer need. Temporary exceptions become permanent. None of this is unusual, which is exactly why it deserves attention.

Boards do not need a technical dump. They need confidence that access to sensitive systems and data is reviewed regularly, justified clearly, and reduced when it is no longer needed.

That is how organizations limit exposure before an incident exposes it for them.

4. If our lead IT person is out for two weeks, can someone else step in using clear runbooks without dropping the ball?

Single points of failure are not only technical.

They also show up in people, process knowledge, vendor relationships, and undocumented workarounds.

Many organizations rely heavily on one or two trusted individuals who know how systems really work. That may feel efficient day to day. It becomes a serious risk during an outage, leadership transition, or extended absence.

Boards should ask whether critical responsibilities are documented, repeatable, and supported by clear runbooks. If not, continuity may depend too much on memory and availability.

5. What would a likely incident cost us in downtime, notifications, and recovery, and can we absorb it?

Cyber risk is often discussed in broad terms.

Boards need it translated into operational and financial impact.

What would a realistic incident mean for downtime, patient care, classroom disruption, customer service, regulatory response, legal support, communications, and recovery costs? How much of that can the organization absorb without major strain?

Insurance may help offset some losses. It does not reduce the need for leadership to understand the impact beforehand.

A board that understands incident cost is in a better position to make smarter investment, staffing, and resilience decisions.

What Boards Really Need From Cyber Briefings

A useful cyber briefing should do more than confirm that boxes were checked.

It should help leadership see where the organization is strong, where it is exposed, and what needs attention now. That means shifting at least part of the conversation from policy status to operational performance.

Boards do not need more jargon.

They need clear answers to practical questions like:

  • What could interrupt service?
  • How long could that interruption last?
  • What have we tested?
  • Where are we relying too heavily on one system, one vendor, or one person?
  • What is improving, and what is still unresolved?

That kind of briefing supports better governance because it makes risk visible in terms leadership can act on.


Good governance does not eliminate risk.

It makes risk visible, and it tests whether the organization can keep operating through pressure.

That is the difference between being audit-ready and being disruption-ready.

And in healthcare, education, and other regulated environments, that difference matters more than many board packets admit.

Your staff doesn’t want a surprise update during peak hours https://wylieblanchard.com/your-staff-doesnt-want-a-surprise-update-during-peak-hours/ Sun, 11 Jan 2026 12:54:00 +0000 System updates don’t have to hijack peak hours. With clear windows, pilots, and real rollback plans, change becomes a non-event instead of a fire drill...

[Image: profile photo of Wylie Blanchard with the text: "When is your next patch window—and who owns the pilot, the rollback, and the morning-after check?"]

If system updates keep catching people off guard, leaders aren’t setting the pace.

Make it a non-event:

1. Publish the calendar.
Quarterly change windows visible to Sales, Ops, and Finance. No surprise Tuesdays.

2. Explain the “why.”
Translate tech to business impact: “prevents login lockouts at open,” “avoids checkout errors during promos.”

3. Stage in rings.
Pilot on a small group (one location, one team) → expand → companywide. Rollback plan printed, not implied.

4. Freeze the right hours.
Protect peak periods (lunch rush, month-end close). Patch after hours with on-call coverage and a timed smoke test before open.

5. Test restores, not just backups.
If you can’t restore a laptop, database, or POS on the clock, you don’t have a safety net.

6. Own exceptions.
Legacy gear lags (manufacturing PCs, label printers, scanners). Track exceptions by owner and date. Mitigate until patched.

7. Coordinate vendors.
ERP, CRM, ecommerce, payments—get maintenance windows and incident terms in writing. Align your window to theirs.

8. Staff the floor.
Super users on deck the morning after. Short scripts for front desk/CSRs. Fast escalation path.

9. Measure it.
Next-day report: login success rate, app launch times, error rates, payment authorization success rate, tickets by location. Share the win—or the fix. (A minimal sketch of this report follows the list.)

10. Close the loop.
Ask managers what still felt rough. Capture, adjust, move on.
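
Item 9 above does not need a BI platform to start. Here is a minimal sketch of the next-day report, assuming an event export in CSV form (file, column, and event names are hypothetical):

    import csv
    from collections import defaultdict

    # Next-day patch report: login success rate and ticket count per
    # location, so "did the update hurt us?" has a number attached.
    stats = defaultdict(lambda: {"ok": 0, "fail": 0, "tickets": 0})

    with open("morning_after_events.csv", newline="") as f:
        for row in csv.DictReader(f):
            bucket = stats[row["location"]]
            if row["event"] == "login_ok":
                bucket["ok"] += 1
            elif row["event"] == "login_fail":
                bucket["fail"] += 1
            elif row["event"] == "ticket":
                bucket["tickets"] += 1

    for location, b in sorted(stats.items()):
        attempts = b["ok"] + b["fail"]
        rate = 100 * b["ok"] / attempts if attempts else 0.0
        print(f"{location}: login success {rate:.1f}%, tickets {b['tickets']}")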

Updates keep the business running when it counts.
Change is constant. Disruption doesn’t have to be.

When is your next patch window—and who owns the pilot, the rollback, and the morning-after check?


This content was originally posted on LinkedIn.

We did it — Zero-Downtime Care just hit #1 bestseller on Amazon https://wylieblanchard.com/we-did-it-zero-downtime-care-just-hit-1-bestseller-on-amazon/ Sun, 21 Dec 2025 12:49:00 +0000 We did it—Zero-Downtime Care just became an Amazon #1 bestseller. Grateful for everyone pushing better uptime and care into the spotlight...

[Image: animated GIF of the Amazon listing for "Zero-Downtime Care" by Wylie E. Blanchard Jr, highlighting the #1 Best Seller badge in Medical Technology.]

I’m grateful.
Grateful for every message, every share, and every person who supported the book and pushed this launch forward.

Thank you for helping bring more clarity, confidence, and calm into how healthcare leaders approach modernization. This win isn’t just about a book ranking — it’s about pushing better uptime, better care, and better outcomes into the spotlight.

If you’d like to help keep the momentum going, I’ve shared how you can support the book in the first comment.

More to come — and thank you again.

— Wylie


Want to continue supporting the effort? Learn how you can help at: https://www.zerodowntimecare.com/thank-you/


This content was originally posted on LinkedIn.
