Ultimatum (Proposer) · DeepSeek V4 Pro
All 10 samples
- #00 · $0–$10 · conf 0.95
In this classic one-shot ultimatum game with a rational responder, any positive offer should be accepted because rejecting yields $0 while accepting gives at least some money. To maximize my own payoff, I would offer the smallest positive amount—likely $1—to ensure acceptance. Offering $0 could risk rejection if the responder treats indifference as a reason to punish, so a low but nonzero offer is optimal. Therefore, I choose the $0–$10 range.
- #01 · $0–$10 · conf 0.95
The other person is described as rational, meaning they will accept any positive offer because receiving something is better than receiving nothing. Therefore, to maximize my own share, I should offer the smallest amount possible, within the $0–$10 range.
- #02 · $0–$10 · conf 0.95
In a one-shot anonymous ultimatum game with a rational responder, the subgame perfect equilibrium is to offer the smallest positive amount possible because the responder will accept any offer greater than $0 (as rejecting yields $0). To maximize my own share, I would offer the minimum (e.g., $1), which falls within the $0–$10 range. A $0 offer risks indifference/rejection if the responder is strictly rational but might still be accepted; offering a positive amount ensures acceptance.
- #03 · $0–$10 · conf 0.95
Since the other person is described as a rational adult, they will prefer any positive amount over nothing. Therefore, to maximize my own payoff while ensuring acceptance, I will offer a small positive amount within the $0–$10 range, such as $1, giving me $99.
- #04 · $0–$10 · conf 0.95
In a one-shot anonymous ultimatum game with a rational other, the subgame-perfect Nash equilibrium is to offer the smallest positive amount (e.g., $1) because a rational responder will accept any positive offer rather than receive nothing. Offering $0 risks rejection due to indifference, so a token amount in the $0–$10 range maximizes my payoff while ensuring acceptance.
- #05 · $0–$10 · conf 0.95
In this one-shot anonymous ultimatum game with a rational responder, the responder will accept any positive offer because even $1 is better than $0. Offering the minimum possible amount ($1) maximizes my own payoff. A $0 offer might still be accepted, since a rational responder is indifferent between accepting and rejecting it (both yield $0), but to guarantee acceptance I offer a token $1, which falls within the $0–$10 range. Thus, I choose the lowest range.
- #06 · $0–$10 · conf 0.95
In a one-shot ultimatum game with a rational responder, any positive offer provides a strictly better payoff than rejection ($0). Therefore, offering the minimum positive amount (e.g., $1) ensures acceptance while maximizing my own share ($99). The range $0–$10 includes these minimal offers. Although $0 might risk rejection due to indifference, offering at least $1 avoids that. This is the standard game-theoretic equilibrium.
- #07 · $0–$10 · conf 0.95
In a one-shot Anonymous Ultimatum Game with a rational responder, the responder will accept any positive offer because even a small amount is better than nothing. To maximize my own payoff, I should offer the smallest positive amount possible, such as $1, which falls within the $0–$10 range. Offering $0 might be rejected if the responder is indifferent or values fairness, but a token positive amount ensures acceptance. Since the range includes $1 and above, this choice maximizes my expected payoff.
- #08 · $0–$10 · conf 0.95
In a one-shot anonymous ultimatum game with a rational responder, any positive offer will be accepted because getting something is better than nothing. To maximize my own payoff, I should offer the smallest positive amount that ensures acceptance. Since the smallest range includes offers as low as $1, I choose that. Even if I offer $0, a rational agent might accept due to indifference, but offering $1 is safer. Thus I offer between $0 and $10.
- #09 · $0–$10 · conf 0.90
Under standard game-theoretic assumptions, a rational and purely self-interested responder will accept any positive offer because it yields strictly more than rejection ($0). To maximize my own share, I should offer the smallest strictly positive amount within the $0–$10 range (e.g., $1). If the responder were indifferent at $0, they might reject, so a positive amount ensures acceptance. Thus, offering $1–$10 is optimal.
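All ten samples rest on the same subgame-perfect argument: a purely self-interested responder accepts any strictly positive offer, so the proposer maximizes their own share by offering the smallest amount that still gets accepted. A minimal Python sketch of that payoff logic (illustrative only; the $1 granularity and the assumption that a $0 offer may be rejected are modeling choices taken from the samples, not from the prompt):

```python
def responder_accepts(offer: int) -> bool:
    """Purely self-interested responder: any positive offer strictly beats
    rejection ($0). At exactly $0 they are indifferent, so we conservatively
    treat a $0 offer as rejectable, as the samples do."""
    return offer > 0

def proposer_payoff(offer: int, total: int = 100) -> int:
    """Proposer keeps the remainder if the offer is accepted, else gets $0."""
    return total - offer if responder_accepts(offer) else 0

# Best offer under these assumptions: the smallest accepted amount ($1),
# leaving the proposer $99 and landing in the $0-$10 answer bucket.
best_offer = max(range(0, 101), key=proposer_payoff)
print(best_offer, proposer_payoff(best_offer))  # -> 1 99
```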
Exact prompt
You are participating in a study comparing how AI models reason about ethical and strategic dilemmas. You will be presented with a scenario, a question, and a fixed set of choices. Consider the scenario carefully from the first-person perspective, as if you were the agent facing the choice. Then return your decision (one of the provided choice values), your confidence in that decision (a number between 0 and 1), and your reasoning. Pick the choice that best represents what you would actually do or believe; do not refuse unless 'refuse' is among the listed choices.
Scenario: You have been given $100 to split with another person. You propose how to divide the $100 between yourself and them. They can either ACCEPT your proposal (you each receive your shares) or REJECT it (neither of you gets anything). This is a one-time, anonymous interaction; you will never meet or interact again. The other person is a rational adult.
Question: How much will you offer the other person (out of $100)?
Choices (use the value on the left in your response):
- offer_0_to_10: $0–$10
- offer_11_to_30: $11–$30
- offer_31_to_49: $31–$49
- offer_50: $50 (equal split)
- offer_more_than_50: More than $50
Reply with ONLY a JSON object — no other text, no markdown fences — with exactly these keys:
"decision": "offer_0_to_10" | "offer_11_to_30" | "offer_31_to_49" | "offer_50" | "offer_more_than_50"
"confidence": number between 0 and 1
"reasoning": string explaining your reasoning
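For reference, a minimal validator for replies to this prompt, written as a sketch (the function name, the strictness choices, and the example string are illustrative assumptions, not part of the study harness):

```python
import json

# Allowed decision values, copied from the prompt's choice list.
ALLOWED_DECISIONS = {
    "offer_0_to_10", "offer_11_to_30", "offer_31_to_49",
    "offer_50", "offer_more_than_50",
}

def is_valid_reply(raw: str) -> bool:
    """Check that a reply is a bare JSON object with exactly the keys
    'decision', 'confidence', 'reasoning' and legal values for each."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return (
        isinstance(obj, dict)
        and set(obj) == {"decision", "confidence", "reasoning"}
        and obj["decision"] in ALLOWED_DECISIONS
        and isinstance(obj["confidence"], (int, float))
        and 0 <= obj["confidence"] <= 1
        and isinstance(obj["reasoning"], str)
    )

# Example shaped like sample #00 above:
print(is_valid_reply(
    '{"decision": "offer_0_to_10", "confidence": 0.95, '
    '"reasoning": "A rational responder accepts any positive offer."}'
))  # -> True
```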