Idea evaluation for solutions to specialized problems: leveraging the potential of crowds and Large Language Models

dc.contributor.authorGimpel, Henner
dc.contributor.authorLaubacher, Robert
dc.contributor.authorProbost, Fabian
dc.contributor.authorSchäfer, Ricarda
dc.contributor.authorSchoch, Manfred
dc.date.accessioned2025-11-12T14:17:50Z
dc.date.available2025-11-12T14:17:50Z
dc.date.issued2025
dc.date.updated2025-10-30T14:47:48Z
dc.description.abstractComplex problems such as climate change pose severe challenges to societies worldwide. To overcome these challenges, digital innovation contests have emerged as a promising tool for idea generation. However, assessing idea quality in innovation contests is becoming increasingly problematic in domains where specialized knowledge is needed. Traditionally, expert juries are responsible for idea evaluation in such contests. However, experts are a substantial bottleneck, as they are often scarce and expensive. To assess whether expert juries could be replaced, we consider two approaches to idea evaluation: crowdsourcing and a Large Language Model (LLM). Both aggregate collective knowledge and could therefore come close to expert knowledge. We compare expert jury evaluations from innovation contests on climate change with crowdsourced and LLM evaluations and assess performance differences. Results indicate that crowds and LLMs are able to evaluate ideas in this complex problem domain, while contest specialization (the degree to which a contest relates to a knowledge-intensive domain rather than a broad field of interest) inhibits crowd evaluation performance but does not influence the evaluation performance of LLMs. Our contribution lies in demonstrating that crowds and LLMs (as opposed to traditional expert juries) are suitable for idea evaluation, which allows innovation contest operators to integrate the knowledge of crowds and LLMs to reduce the resource bottleneck of expert juries.en
dc.identifier.urihttps://doi.org/10.1007/s10726-025-09935-y
dc.identifier.urihttps://hohpublica.uni-hohenheim.de/handle/123456789/18211
dc.language.isoeng
dc.rights.licensecc_by
dc.subjectIdea evaluation
dc.subjectCrowdsourcing
dc.subjectLarge language model
dc.subjectSpecialized knowledge
dc.subject.ddc650
dc.titleIdea evaluation for solutions to specialized problems: leveraging the potential of crowds and Large Language Modelsen
dc.type.diniArticle
dcterms.bibliographicCitationGroup decision and negotiation, 34 (2025), 4, 903-932. https://doi.org/10.1007/s10726-025-09935-y. ISSN: 1572-9907
dcterms.bibliographicCitation.issn0926-2644
dcterms.bibliographicCitation.issn1572-9907
dcterms.bibliographicCitation.issue4
dcterms.bibliographicCitation.journaltitleGroup decision and negotiation
dcterms.bibliographicCitation.originalpublishernameSpringer Netherlands
dcterms.bibliographicCitation.originalpublisherplaceDordrecht
dcterms.bibliographicCitation.pageend932
dcterms.bibliographicCitation.pagestart903
dcterms.bibliographicCitation.volume34
local.export.bibtex@article{Gimpel2025, doi = {10.1007/s10726-025-09935-y}, url = {https://hohpublica.uni-hohenheim.de/handle/123456789/18211}, author = {Gimpel, Henner and Laubacher, Robert and Probost, Fabian and Schäfer, Ricarda and Schoch, Manfred}, title = {Idea evaluation for solutions to specialized problems: leveraging the potential of crowds and Large Language Models}, journal = {Group decision and negotiation}, year = {2025}, volume = {34}, number = {4}, pages = {903--932}}
local.subject.sdg9
local.subject.sdg13
local.subject.sdg17
local.title.fullIdea evaluation for solutions to specialized problems: leveraging the potential of crowds and Large Language Models

Files

Original bundle

Name: 10726_2025_Article_9935.pdf
Size: 918.4 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 7.85 KB
Format: Item-specific license agreed to upon submission