Leaders already sense when something shifts. The quieter meetings. The delayed responses. The subtle withdrawal before anyone hands in a resignation. That instinct is real — it's pattern recognition built from years of experience.
But instinct doesn't produce evidence. It doesn't tell you which team, which structural driver, or how much time you have left to act. It gives you a feeling in a boardroom full of opinions.
inPsyq gives that instinct an instrument. The patterns a trained psychologist would recognise over weeks of observation — cognitive load compounding, trust thinning between functions, engagement silently decaying — are now resolved mathematically. Continuously. Across your entire organisation.
Ten carefully constructed psychometric items — grounded in thirty years of occupational health research — feed a mathematical model that most organisations don't know exists. It tracks psychological dimensions as evolving states. Quantifies uncertainty per team per week. Separates structural causes from individual noise. And synthesises everything into intelligence calibrated for each stakeholder's role.
No one fills out forms for twenty minutes. The model extracts what it needs. The rest is mathematics.
Risk-stratified team portfolio. Systemic drivers ranked by influence. Teams flagged before deterioration reaches performance metrics. A board-ready briefing synthesised from the week's inference — not summarised from last month's text.
When your team's numbers move, the system tells you why — is it internal role ambiguity, external dependency overload, or cross-team friction? Interventions are gated by statistical confidence. You act on structure, not guesswork.
Your identity never enters the model. K-anonymity ensures no individual signal is recoverable. What you get: the knowledge that your voice was heard structurally — not as a data point, but as part of a living model that drives real decisions.
When psychological strain is detected at the structural level — not reported anecdotally at an HR offsite — leadership can act while the intervention window is still open. The difference between a conversation and a resignation is often three weeks.
The briefing doesn't say 'the team seems stressed.' It says which dimension moved, by how much, with what confidence, caused by which structural factor. Leadership debates shift from 'I think' to 'the data shows.'
Annual surveys produce snapshots. inPsyq produces a living signal — every week, every team, with uncertainty quantified. Trends become visible. Deterioration is caught in motion, not in retrospect.
The science, the mathematics, and the psychology behind the system.
The instrument is built on thirty years of occupational health psychology. Each of the ten items targets a distinct dimension — autonomy, role clarity, psychological safety, workload pressure, dependency load, belonging, and four composite constructs that feed the index model.
The items are not opinion questions. They are psychometric probes — each validated against clinical benchmarks, each designed to elicit a response that the model can decompose into latent state estimates. The instrument is short by design: at two minutes, it sustains response rates above 85%, the coverage required for reliable team-level inference.
Every item has been tested for internal consistency, test-retest reliability, and discriminant validity across industries. The dimensional structure is invariant to team size, seniority distribution, and cultural context — verified across deployment cohorts.
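Internal consistency of an item battery is conventionally estimated with Cronbach's alpha. A minimal sketch of that computation (the function and data here are illustrative, not part of inPsyq's toolchain):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency estimate for an item battery.

    scores: (n_respondents, n_items) matrix of item responses.
    Returns alpha = k/(k-1) * (1 - sum of item variances / variance of the
    summed scale), the standard reliability coefficient.
    """
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scale
    return (n_items / (n_items - 1)) * (1.0 - item_vars.sum() / total_var)
```

Values near 1 indicate the items move together and measure a common construct; occupational-health instruments typically aim for alpha above roughly 0.7.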
Responses enter a Bayesian latent-state model — a Kalman filter variant that treats each psychological dimension as a time-evolving hidden state. Unlike snapshot analyses, the model maintains a belief distribution over each dimension, updating it weekly with new evidence while carrying forward the accumulated signal.
Uncertainty is not an afterthought. Every estimate comes with a confidence interval computed from the posterior distribution. When coverage is low or responses are inconsistent, the bands widen automatically — the system knows what it doesn't know.
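The mechanics of that weekly update can be sketched with a scalar Kalman step. Everything below (the random-walk drift, the variances, the 95% band) is an illustrative toy, not inPsyq's production model:

```python
import math

def kalman_update(mean, var, obs, obs_var, process_var):
    """One weekly update of a scalar latent psychological dimension.

    Predict step: the latent state drifts as a random walk, so its
    variance grows by process_var each week. Update step: if an
    aggregated team observation arrives, blend it in weighted by
    precision; if obs is None (low coverage), the band simply widens.
    """
    var = var + process_var          # predict: carry mean, inflate variance
    if obs is None:
        return mean, var             # no evidence this week: wider band
    gain = var / (var + obs_var)     # Kalman gain: trust in new evidence
    mean = mean + gain * (obs - mean)
    var = (1.0 - gain) * var
    return mean, var

def interval(mean, var, z=1.96):
    """Approximate 95% credible band from the posterior variance."""
    half = z * math.sqrt(var)
    return mean - half, mean + half
```

Note how a missed week leaves the mean unchanged but widens the interval — this is the mechanism behind "the system knows what it doesn't know."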
Causal attribution operates on the structured signal. The engine separates endogenous drivers — role clarity deficits, workload imbalance, autonomy constraints — from exogenous noise. It identifies which structural factor is moving the signal, with what magnitude, at what confidence level. The output is not 'your team is stressed.' It's 'role ambiguity in Engineering is driving a 15-point strain increase with 89% confidence.'
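One simple way to make such an attribution concrete is a linear decomposition: regress the strain signal on candidate structural factors, then split the observed movement into per-factor contributions. This is a hypothetical illustration of the idea, not inPsyq's attribution engine:

```python
import numpy as np

def attribute_drivers(X, y):
    """Decompose movement in a strain signal into structural drivers.

    X: (weeks, factors) matrix of structural-factor scores
       (e.g. role clarity, workload, dependency load).
    y: (weeks,) strain index over the same window.
    Fits y ~ X by least squares on centred data, then splits the
    change in y across the window into beta_i * (change in factor i).
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    delta = X[-1] - X[0]        # how far each factor moved
    contrib = beta * delta      # each factor's share of the shift
    return dict(enumerate(contrib))
```

The factor with the largest contribution is the candidate structural driver; a production system would add confidence intervals on each coefficient before gating any intervention.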
The final stage transforms structured mathematical output into human-readable intelligence. A language model trained on organisational psychology literature synthesises the full 48-dimensional signal vector into role-calibrated briefings.
Executive briefings are systemic: they name organisational risk, quantify probability, and recommend resource reallocation. Team-level briefings are operational: they isolate the specific structural driver, name the intervention, and scope the timeline.
The briefing is generated fresh from the latest inference run — it is never a summary of previous text. Every sentence can be traced back to a specific model output. This is not generative content. It is structured intelligence, rendered in natural language.
Privacy is architectural. Individual responses are aggregated before model input. K-anonymity thresholds are configurable per organisation. No individual's signal trajectory is recoverable from any output — not by the system, not by leadership, not by the LLM.
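The core of a k-anonymity gate is small enough to show directly. A minimal sketch of suppression-before-aggregation, with an assumed threshold of k = 5 (the actual threshold is configurable per organisation):

```python
def aggregate_with_k_anonymity(responses, k=5):
    """Release a team aggregate only if at least k individuals responded.

    responses: numeric item scores from one team for one week.
    Returns the team mean, or None (suppressed) when the group is
    smaller than k, so no output can be traced back to an individual.
    """
    if len(responses) < k:
        return None                      # suppress: group too small
    return sum(responses) / len(responses)
```

Because suppression happens before anything enters the model, downstream outputs (including LLM briefings) can only ever see group-level signal.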
Experience how inPsyq translates raw psychological signals into actionable executive and operational intelligence.
inPsyq is deployed white-glove. Configured, calibrated, and integrated — in days, not months.
Get in touch. By invitation only.