Interactive Logistic Regression Dashboard

Emotion Encoded

Simulate Predictors of AI Attitudes (N=44 and N=23 Survey Respondents)

Drag the slider on the **Demand for Transparency (Interactive)** card to observe how the **Odds Ratio** (exp(β)) affects statistical significance (p), both visually and numerically.
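The quantity the slider manipulates can be sketched in a few lines: the Odds Ratio is simply the exponentiated logistic-regression coefficient. The β value below is hypothetical, back-calculated from the OR of 3.80 reported later in this section, not taken from the survey data itself.

```python
import math

# Hypothetical log-odds coefficient (beta), chosen so that exp(beta)
# reproduces the reported Odds Ratio of ~3.80; not an actual model output.
beta = 1.335

# The Odds Ratio is the exponentiated coefficient: OR = exp(beta).
odds_ratio = math.exp(beta)
print(round(odds_ratio, 2))  # → 3.8
```

Because exp() is monotonic, dragging β up or down moves the OR in the same direction, which is what the interactive card visualizes.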

Summary Interpretation

The first set of models (N=44) suggests a **cognitive-affective consistency** in algorithmic trust. Individuals who feel **comfortable with AI in objective roles** are nearly three times more likely to demand an explanation when automation fails, although this effect falls short of conventional statistical significance (OR=2.96, p=.102). Critically, **Perceived Transparency** significantly increases **Trust in AI for Financial Advice** (OR=3.80, p=.038), the one statistically significant finding in this set.

The inferential analysis (N=23) shows that **Prior AI Use** fosters calibrated trust, with experienced users being approximately **twice as likely to trust** AI advice (OR=2.06, p=.043). While not statistically significant, the directional trend for the **Age Group 55+** suggests caution, as this group was about 33% less likely to trust AI advice (OR=0.67, p=.225).
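The percentage phrasing used above ("twice as likely", "33% less likely") follows directly from the odds ratios. A minimal sketch of that conversion, using the ORs reported in this section:

```python
def odds_change_pct(odds_ratio: float) -> float:
    """Percent change in odds implied by an odds ratio (OR)."""
    return (odds_ratio - 1.0) * 100.0

# ORs reported above: 2.06 (Prior AI Use) and 0.67 (Age Group 55+).
print(round(odds_change_pct(2.06)))  # → 106  (roughly double the odds)
print(round(odds_change_pct(0.67)))  # → -33  (about 33% lower odds)
```

Note that this describes changes in *odds*, not probabilities; the two diverge as baseline probabilities move away from small values.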