Here is an example of the “discursive dilemma” or “doctrinal paradox”: Say that a jury of three members must decide by majority rule whether a candidate should be accepted in a group. The rules specify that the candidate must meet two requirements, A and B, to be accepted. The first member of the jury believes that the candidate indeed meets the two requirements, the second member believes she only meets requirement A, and the third one believes she only meets B. The paradox arises when one compares two different methods to decide upon the candidate’s admission.
The first method requires that the jury vote on whether the candidate meets requirement A, and then on whether she meets B. If the candidate gets a majority in both votes, she is accepted. Two of the three members believe that condition A is satisfied (the first and the second), and two of three believe the same about B (now the first and the third). Accordingly, the candidate will be accepted.
The second method requires that the jury vote directly on whether the candidate should be accepted. Now only the first member thinks both conditions are met and would vote in favor. The other two, each believing that one condition for acceptance is not met, will vote against. Accordingly, the candidate will be rejected.
We can understand the two voting methods as two different ways to aggregate information. The paradox shows that the majority rule gives inconsistent results when aggregating information about the premises versus the conclusions. In the example the conclusion follows from the conjunction of the premises, but many other logical propositions can be used to exhibit the paradox. Furthermore, numerous results have shown that this is not a special feature of the majority rule. In fact, these results show the impossibility of finding aggregation methods that deliver logically consistent judgments, that is, that give the same outcome regardless of whether premises or conclusions are aggregated. For a survey on this paradox see List and Puppe (2009) ^{1}.
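The two aggregation methods of the opening example can be sketched in a few lines of code. This is only an illustration of the dilemma, not of any model in the paper; the jurors' beliefs are the ones given in the example.

```python
# The three jurors' beliefs about the two premises (True = "requirement met"),
# as in the opening example.
jurors = [
    {"A": True,  "B": True},   # member 1: both requirements met
    {"A": True,  "B": False},  # member 2: only A met
    {"A": False, "B": True},   # member 3: only B met
]

def majority(votes):
    """True iff a strict majority of the votes is True."""
    votes = list(votes)
    return sum(votes) > len(votes) / 2

# Method 1: premise-based. Vote on each premise, then apply the rule "A and B".
premise_based = majority(j["A"] for j in jurors) and majority(j["B"] for j in jurors)

# Method 2: outcome-based. Each juror votes directly on the conclusion "A and B".
outcome_based = majority(j["A"] and j["B"] for j in jurors)

print(premise_based)  # True: each premise passes 2-1, so the candidate is accepted
print(outcome_based)  # False: only member 1 votes to accept, so she is rejected
```

The same beliefs, aggregated at different stages, yield opposite decisions, which is exactly the inconsistency the paradox points at.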
Given the impossibility of consistency, the next question is which approach, aggregating opinions about premises versus outcomes, is best. To address this problem one needs to be very specific about what is meant by “best”. De Clippel and Eliaz (2015) ^{2} compare the two approaches in terms of their ability to aggregate information in the presence of strategic individuals with common interest. Contrary to what was assumed in the introductory example, strategic individuals may or may not report or vote according to their true information (the voting literature is full of examples in which voters do not). Common interest means that all voters want the same thing: to aggregate the information and to reach the correct conclusion from the correct premises. In our example, that would mean that all three members of the jury want the candidate to be accepted if the premises are satisfied and want to know whether the premises are indeed satisfied. Strategic considerations in the doctrinal paradox were first addressed in Dietrich and List (2007) ^{3}, but only for unanimity aggregators (where all premises must be true for the conclusion to follow).
De Clippel and Eliaz (2015) make use of Bayesian games to develop their analysis. This means that the different beliefs of the members of the jury must arise from each member having access to different information. In a Bayesian fashion, ex ante all members of the jury agree on the same a priori probabilities for each of the premises to be true. Then, each member has access to some private information (called a signal) that makes them update their assessments according to Bayes’ rule. Since they may receive different information, they may end up with different beliefs about the premises. In their model, the signal is restricted to the values 0 and 1 for each of the premises, and the decision rule comprises all supermajority rules (a proposition is accepted as true if a qualified majority of the voters agree that it is true, where the qualifying value may be any proportion of voters from one half, for simple majority, up to near unanimity).
Let us explain this with the opening example. There are four possibilities for the true state: (1,1), (1,0), (0,1), and (0,0), where a 1 in the first position of a state means “the first premise is true”, a zero stands for “false”, and the second position refers to the second premise. There is a prior probability for each of those states to be true, and the three members of the jury know those probabilities. Then, each member of the jury receives a signal about the state. The simplest case is that there are as many signals as states. The first member of the jury will receive a right or a wrong signal with different probabilities. For instance, if the true state is (1,1) he may receive the signal (1,1) with probability 0.7, the signal (1,0) with probability 0.2, and the signals (0,1) and (0,0) with probability 0.05 each. The other two members receive their signals with their own probabilities. Members do not observe the signals received by the other two, but they do know the probabilities. Using Bayes’ rule, each member can compute the probability for each premise to be true. They can also compute the probability assigned by each of the other two members depending on the signals those members receive.
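The updating step above can be made concrete with a small sketch. The prior and the full signal-likelihood table below are hypothetical illustrations; only the (1,1) row uses the numbers from the text.

```python
# States of the world: (premise 1, premise 2), 1 = true, 0 = false.
states = [(1, 1), (1, 0), (0, 1), (0, 0)]
prior = {s: 0.25 for s in states}  # assumed uniform prior over the four states

# likelihood[state][signal] = probability of receiving `signal` when the true
# state is `state`. The (1,1) row is from the text; the rest is made up
# symmetrically for the sake of the example. Each row sums to 1.
likelihood = {
    (1, 1): {(1, 1): 0.7,  (1, 0): 0.2,  (0, 1): 0.05, (0, 0): 0.05},
    (1, 0): {(1, 1): 0.2,  (1, 0): 0.7,  (0, 1): 0.05, (0, 0): 0.05},
    (0, 1): {(1, 1): 0.05, (1, 0): 0.05, (0, 1): 0.7,  (0, 0): 0.2},
    (0, 0): {(1, 1): 0.05, (1, 0): 0.05, (0, 1): 0.2,  (0, 0): 0.7},
}

def posterior(signal):
    """Posterior distribution over states after observing `signal` (Bayes' rule)."""
    joint = {s: prior[s] * likelihood[s][signal] for s in states}
    total = sum(joint.values())
    return {s: p / total for s, p in joint.items()}

# A juror who observes signal (1,1) assigns to "premise 1 is true" the total
# posterior mass of the states whose first entry is 1.
post = posterior((1, 1))
p_premise_1 = sum(p for s, p in post.items() if s[0] == 1)
print(round(p_premise_1, 3))  # → 0.9
```

With a uniform prior the posterior over states simply mirrors the likelihood row, so the juror believes premise 1 is true with probability 0.7 + 0.2 = 0.9.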
In the premises game, after receiving their signals, individuals must vote yes or no to each proposition of the form “premise x is true”. Depending on the votes, a given premise will be declared “true” or “false”. In the outcome game, they vote in favor of or against the acceptance of the logical conclusion. In both cases, each member of the jury wants to minimize the expected distance between the decision and the true state. For instance, if the true state implies that the candidate must be accepted (1), but in equilibrium she is accepted with probability 0.6, then the distance is 1 − 0.6 = 0.4. Of course, the jury members do not know whether the true state implies that the candidate must be accepted, but they can compute the probabilities given their signals, and from those the expected true value and the expected distance between the equilibrium and that expected true value.
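The loss just described can be written out as a couple of lines of arithmetic. The numbers are the ones from the text, except for the juror's posterior belief, which is a hypothetical value chosen for illustration.

```python
# If the true state says the candidate must be accepted (true value = 1) and
# the equilibrium accepts her with probability 0.6, the distance is:
true_value = 1
p_accept = 0.6
distance = abs(true_value - p_accept)  # 1 - 0.6 = 0.4

# A juror who does not know the true value replaces it with its expectation
# under his posterior belief. Suppose he believes the conclusion is true with
# probability 0.9 (a made-up number for the example):
belief = 0.9
expected_true_value = belief * 1 + (1 - belief) * 0
expected_distance = abs(expected_true_value - p_accept)  # |0.9 - 0.6| = 0.3

print(round(distance, 2))
print(round(expected_distance, 2))
```

Each juror then votes so as to make this expected distance as small as possible given his signal, which is what makes the premises game and the outcome game genuine strategic (Bayesian) games.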
In this model, the authors are able to prove the following theorems:

For any finite group of individuals, gathering opinions about premises is systematically at least as good as gathering opinions about outcomes, but the converse is not true. More precisely, the first part says that for any symmetric Bayesian Nash equilibrium in the outcome-based game there exists a symmetric Bayesian Nash equilibrium in the premise-based game such that, for every vector of signal realizations, the strategy profile in the second game induces the same probability distribution over decisions as the first.

Generically, gains of the premise-based approach over the outcome-based approach can only be marginal when sufficiently many individuals express independent opinions.

Both approaches are almost always asymptotically efficient.
In plain English, the conclusion can be stated like this: although the premise-based method is better than the outcome-based one, it is only marginally better, as both tend to be efficient as the number of members increases, except perhaps in extremely rare cases.
References
List, C., and Puppe, C. 2009. Judgement aggregation. In Handbook of Rational and Social Choice, chapter 19, pp. 457–483.
de Clippel, G., and Eliaz, K. 2015. Premise-based versus outcome-based information aggregation. Games and Economic Behavior 89, 34–42.
Dietrich, F., and List, C. 2007. Strategy-proof judgment aggregation. Economics and Philosophy 23, 269–300.