Off Prompt

Using AI to create a simple interview question set and scoring sheet for hiring your first or next employee

Mara Chen · 8 min read
A bad hire at a $40,000 salary costs you roughly $12,000 in direct and indirect losses, the U.S. Small Business Administration's estimate of 30% of first-year earnings. This post walks you through using AI to build a structured interview question set and scoring sheet tailored to your specific role, in under an hour. A structured interview is nearly three times more predictive of job performance than a casual conversation, according to the Schmidt & Hunter meta-analysis, and it costs you nothing to build one with the AI tools you already have open.

What You Need Before You Start

ChatGPT (OpenAI's conversational AI) is capable of generating role-specific questions and scoring rubrics from a pasted job description. Pricing: the free tier (GPT-4o mini) handles this task; ChatGPT Plus at $20/month unlocks GPT-4o and longer context windows, which matters if your job description runs long. Claude 3.7 Sonnet (Anthropic) and Gemini 2.0 Pro (Google) are equally capable alternatives. All pricing was checked in May 2026; check the vendors' sites, as these change.

Time required: 45–60 minutes for a complete question set, scoring sheet, and one review pass. Add 10–15 minutes if you're building a Google Sheets scorecard from scratch.

Skill level: No technical background required. You need a job description — even a rough one — and an AI account. That's it.

How to Build AI Interview Questions for Small Business Hiring

  1. Open your AI tool of choice and start a new conversation.

  2. Paste your job description directly into the chat. If you don't have a formal one, write 3–5 sentences describing the role, team size, and the top two or three things this person will actually do every day.

  3. Type the following prompt, replacing the bracketed fields with your specifics:

You are helping a small business owner prepare for a job interview. Based on the job description below, generate 8–10 behavioral interview questions (STAR format: Situation, Task, Action, Result) grouped into 4–5 competency areas relevant to this role. For each competency, write 2 questions. Make the questions specific to a [company size, e.g., "10-person e-commerce company"] environment. Avoid any questions that touch on age, religion, national origin, marital status, disability, pregnancy, or any other legally protected characteristics under EEOC guidelines. Label each competency clearly.

Job description: [paste here]

  4. Review the output. You should see distinct competency headers, such as "Customer Communication," "Problem-Solving Under Pressure," and "Attention to Detail," each with two questions beneath them.

  5. Edit before you use anything. AI can produce questions that are too corporate for a small team, too vague to generate useful answers, or accidentally phrased in ways that brush against protected categories. Budget 10–15 minutes for this review; it's non-negotiable.
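If you're comfortable with a little code, the prompt above can be templated so the same structure is reusable across roles. A minimal Python sketch, where the company size and job description are illustrative placeholders you'd replace with your own:

```python
# Reusable version of the question-generation prompt from the steps above.
# The template text mirrors the prompt in this post; fill-ins are examples.

PROMPT_TEMPLATE = (
    "You are helping a small business owner prepare for a job interview. "
    "Based on the job description below, generate 8-10 behavioral interview "
    "questions (STAR format: Situation, Task, Action, Result) grouped into "
    "4-5 competency areas relevant to this role. For each competency, write "
    "2 questions. Make the questions specific to a {company} environment. "
    "Avoid any questions that touch on age, religion, national origin, "
    "marital status, disability, pregnancy, or any other legally protected "
    "characteristics under EEOC guidelines. Label each competency clearly.\n\n"
    "Job description: {job_description}"
)

def build_prompt(company: str, job_description: str) -> str:
    """Fill the template so the same prompt can be reused for every role."""
    return PROMPT_TEMPLATE.format(company=company, job_description=job_description)

prompt = build_prompt(
    company="10-person e-commerce company",
    job_description=(
        "Part-time customer service rep handling returns "
        "and social media complaints."
    ),
)
print(prompt)
```

Paste the resulting string into the chat as-is; the point is only that the EEOC guardrail language travels with every prompt instead of being retyped from memory.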

The difference between a generic prompt and a specific one is significant here. "Give me interview questions for a customer service job" produces boilerplate. "Give me behavioral interview questions for a part-time customer service rep at a 10-person e-commerce company who handles returns and social media complaints" produces questions that actually surface relevant experience. Role-specific prompting is not optional — it's what separates a useful output from a waste of time.

Building the Interview Scorecard Template

Once you have your questions, run a second prompt in the same conversation:

Now create a scoring sheet for these interviews. For each competency, include: a 1–5 rating scale with behavioral anchors (describe what a '1' answer looks like and what a '5' answer looks like), a notes field, and a weighted score. Assign weights to each competency based on what matters most for this role — explain your weighting rationale briefly. At the bottom, include fields for: candidate name, date, interviewer name, total weighted score, and a hire / no-hire recommendation. Format this so I can copy it into a Google Sheet.

What you should see: a table or structured list with competency names, anchor descriptions at each end of the 1–5 scale, a weight percentage per competency, and summary fields at the bottom. A well-structured output will weight competencies differently — for a sales role, "communication" might carry 30% of the total score; for an operations role, "attention to detail" might dominate. If the AI's weighting doesn't match your priorities, prompt it: "Increase the weight on [competency] to 35% and redistribute the rest."
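To make the weighting math concrete, here's a minimal Python sketch of how per-competency ratings combine into one weighted total. The competency names and weights are illustrative, not a recommendation for your role:

```python
# Scorecard math sketch: 1-5 ratings per competency, weights summing to
# 100%, and a weighted total that lands back on the same 1-5 scale.
# Competencies and weights below are examples only.

weights = {
    "Customer Communication": 0.30,
    "Problem-Solving Under Pressure": 0.25,
    "Attention to Detail": 0.25,
    "Initiative": 0.20,
}

def weighted_score(ratings: dict[str, int]) -> float:
    """Combine 1-5 ratings into one weighted total (also on a 1-5 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return round(sum(weights[c] * ratings[c] for c in weights), 2)

candidate = {
    "Customer Communication": 4,
    "Problem-Solving Under Pressure": 3,
    "Attention to Detail": 5,
    "Initiative": 4,
}
print(weighted_score(candidate))  # 4.0
```

Because the weights sum to 100%, every candidate's total stays on the familiar 1–5 scale, which is what makes side-by-side comparison straightforward.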

Copy the scoring sheet output into Google Sheets or Google Docs (both free). Build it once, reuse it for every candidate for this role. Side-by-side comparison across candidates is where the structured format pays off: you're comparing numbers, not impressions.
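If you lay the sheet out with one row per competency, weights in column B and the candidate's 1–5 ratings in column C (an assumed layout; adjust the ranges to match your sheet), a single Google Sheets formula totals the weighted score:

```
=SUMPRODUCT(B2:B5, C2:C5)
```

With weights expressed as decimals that sum to 100%, the result lands back on the 1–5 scale, so totals stay comparable across candidates.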

Skipping the behavioral anchors is the single most common shortcut that breaks this process. Without them, two interviewers rating the same candidate on "communication" can score it differently — which defeats the purpose of a consistent sheet.

Keeping the Questions Legal

The EEOC prohibits interview questions that touch on age, national origin, religion, marital status, disability, pregnancy, and several other protected characteristics. Here's the catch: AI models are not legal compliance engines. They can be instructed to avoid these topics, and the prompt above does exactly that, but you remain legally responsible for every question you ask in an interview, regardless of where it came from.

Run a specific check before finalizing your list. Prompt the AI: "Review these interview questions and flag any that could be interpreted as touching on legally protected characteristics under U.S. EEOC guidelines. Be conservative." Then read that output critically. Questions that seem neutral can veer into protected territory: "Are you available to work weekends?" is generally fine; "Do you have childcare arrangements that would prevent weekend work?" is not.

The honest answer is that AI is useful here as a first-pass filter, not a legal sign-off. If you're hiring in a regulated industry or your state has additional employment law layers, have an employment attorney review the final list. That hour of legal time costs far less than a discrimination claim.
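As a supplement to (never a substitute for) that review, a crude keyword scan can surface the most obvious red flags before the AI review prompt even runs. This sketch uses an illustrative, deliberately incomplete term list; it is a rough pre-filter, not legal review:

```python
# Crude first-pass scan: flags questions containing obvious red-flag terms.
# The term list is illustrative and far from complete; substring matching
# will also over-flag (e.g., "manager" contains "age"), which is fine for
# a conservative pre-filter. Human and, where needed, attorney review still apply.

RED_FLAG_TERMS = [
    "age", "married", "marital", "children", "childcare", "pregnant",
    "religion", "church", "disability", "citizen", "national origin",
]

def flag_questions(questions: list[str]) -> list[str]:
    """Return the questions containing any red-flag term (case-insensitive)."""
    return [
        q for q in questions
        if any(term in q.lower() for term in RED_FLAG_TERMS)
    ]

questions = [
    "Tell me about a time you handled an angry customer.",
    "Do you have childcare arrangements that would prevent weekend work?",
]
print(flag_questions(questions))
```

Anything this scan flags goes straight to the removal step above; anything it misses still needs the AI review prompt and a careful human read.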

When Something Goes Wrong

The questions are too generic to be useful. Root cause: your job description input was too vague. Fix: go back and add specifics — team size, tools used, top three daily tasks, and one or two things that make this role harder than it looks. Re-prompt with that detail.

The scoring sheet weights don't match what actually matters for your role. Root cause: the AI guessed at your priorities from the job description alone. Fix: explicitly state your priority competencies in the prompt: "The most critical competency for this role is [X] — weight it at 40%."

A question in the output could be legally problematic. Root cause: AI doesn't have perfect calibration on EEOC rules, particularly for questions that seem neutral but imply protected characteristics. Fix: run the dedicated legal review prompt above, remove any flagged question immediately, and do not attempt to rephrase. Drop it entirely and ask the AI to generate a replacement from a different angle.

What to Do Next

Use the same scoring sheet for every candidate who interviews for this role and compare total weighted scores before you make a final call. The numbers won't make the decision for you, but they will force a more honest conversation — especially when gut instinct and scorecard diverge, which is exactly when bias tends to win. For building out the rest of your hiring workflow, see how to write a job description with AI and onboarding your first hire with AI-assisted documentation.

FAQ

Can I use AI interview questions for any type of small business role? Yes, with more or less editing depending on how specialized the role is. AI handles common roles — customer service, operations, sales, admin — with strong output on the first pass. Highly technical or licensed roles (e.g., electrician, licensed accountant) need heavier review because the competency framework AI generates may miss domain-specific performance criteria. In those cases, use AI to build the structure and fill in the technical questions yourself.

Does using a structured scorecard actually change hiring outcomes? The research says yes. Structured interviews with consistent questions and scoring criteria have predictive validity of 0.51–0.58 versus roughly 0.20 for unstructured conversations — nearly three times more predictive of job performance, per the Schmidt & Hunter meta-analysis. For a small business where one bad hire represents $12,000 in losses on a $40,000 role, that difference in predictive accuracy is worth the hour it takes to build the sheet.

Do I need a paid AI plan to do this? No. ChatGPT's free tier, Claude's free tier, and Gemini's free tier are all sufficient for this task as of May 2026 — check their sites, these change. The paid plans ($20/month for ChatGPT Plus, $20/month for Claude Pro) give you longer context windows, which helps if your job description is detailed or you're generating materials for multiple roles in one session. For a single role, free works fine.

What's the cost of not having a structured interview process? Beyond the $12,000 bad-hire estimate, the less visible cost is time: re-posting, re-interviewing, and onboarding a replacement typically takes 4–8 weeks for a small team. For a 5-person company, a bad hire in a key role can stall operations for an entire quarter. A 2023 SHRM survey found that 68% of small businesses under 100 employees have no formal interview process, and they absorb that cost repeatedly without realizing it's avoidable.

Can I reuse the same scorecard for future hires in the same role? Yes, and you should. That's the point of building it once. The scorecard for a customer service rep is largely stable across candidates — adjust the weighting only if the role evolves meaningfully between hiring cycles. Over time, you'll also accumulate data: which scores correlated with the candidates who worked out, and which didn't. That feedback loop makes each subsequent hire slightly more calibrated than the last.
