
Analysis of Inference Parameters on Diffusion Based Image Generation

  • This thesis investigates the influence of inference parameters on the visual quality of images generated by state-of-the-art diffusion-based models, with a particular focus on applications in game asset production. Motivated by the increasing prominence of generative AI in creative industries and the need for efficient, high-quality 2D asset creation, this study addresses a critical gap in the literature, which has predominantly concentrated on prompt optimization and text–image alignment. Two models, Stable Diffusion 3 Medium and Flux.1, were employed to systematically explore how variations in CFG scale, denoise strength, noise seed, and sampler–scheduler pairings affect both structural fidelity and perceptual quality. Multiple batches of images were generated under controlled parameter adjustments and subsequently evaluated using a comprehensive suite of image quality assessment metrics, including SSIM, MS-SSIM, LPIPS, Laplacian variance, SIFT keypoints, and Earth Mover's Distance (EMD) in both frequency and Lab domains. The results reveal that the CFG scale exerts a non-linear effect on image quality, with mid-range settings yielding optimal structural and perceptual similarity, while excessively low or high values lead to fragmentation or homogenization of details. Adjustments in denoise strength demonstrated a trade-off between noise reduction and the preservation of fine image details, as excessive denoising improved clarity at the expense of textural nuance. Moreover, variations in the noise seed parameter induced significant stochastic variability in the outputs, and the selection of sampler–scheduler pairs was found to cause abrupt transitions in visual characteristics, underscoring their critical role in the generative process. These findings have important implications for the deployment of generative AI in practical settings, suggesting that fine-tuning inference parameters is essential to balance creative flexibility with production consistency.
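The evaluation workflow described in the abstract (fixed-seed parameter sweeps scored with image quality metrics) can be illustrated with a short sketch. The snippet below is not the thesis' actual pipeline; it assumes the Hugging Face diffusers StableDiffusion3Pipeline, an illustrative prompt, seed, and step count, and it scores each output with Laplacian variance and with SSIM against the first image of the sweep.

    # Minimal sketch of a CFG-scale sweep with a fixed noise seed, scored with
    # two of the metrics named in the abstract (SSIM, Laplacian variance).
    # Model ID, prompt, seed, and step count are illustrative assumptions.
    import torch
    import numpy as np
    import cv2
    from diffusers import StableDiffusion3Pipeline
    from skimage.metrics import structural_similarity as ssim

    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3-medium-diffusers",
        torch_dtype=torch.float16,
    ).to("cuda")

    prompt = "hand-painted 2D game asset, treasure chest, isometric view"  # illustrative
    seed = 42          # fixed noise seed keeps the sweep comparable across CFG values
    reference = None   # first image of the sweep serves as the SSIM reference

    for cfg in [1.0, 3.5, 7.0, 10.5, 14.0]:
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, guidance_scale=cfg, num_inference_steps=28,
                     generator=generator).images[0]
        gray = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2GRAY)

        # Laplacian variance as a simple sharpness proxy
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

        # Structural similarity against the first image of the sweep
        if reference is None:
            reference = gray
            similarity = 1.0
        else:
            similarity = ssim(reference, gray)

        print(f"CFG {cfg:5.1f}  Laplacian var {sharpness:9.1f}  SSIM vs. ref {similarity:.3f}")

Holding the seed fixed isolates the effect of the guidance scale; the same loop structure would apply to sweeping step count, denoise strength, or sampler-scheduler pairs in other front ends.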

Download full text files

  • Bachelorarbeit_Bittner_Daniel.pdf
    deu

Metadata
Document Type: Bachelor Thesis
Citation link: https://opus.hs-offenburg.de/10171
Bibliographic Information
Title (German): Analysis of Inference Parameters on Diffusion Based Image Generation
Author: Daniel Bittner
Advisor: Sabine Hirtes, Stefano Gampe
Year of Publication: 2025
Publishing Institution: Hochschule Offenburg
Granting Institution: Hochschule Offenburg
Place of Publication: Offenburg
Publisher: Hochschule Offenburg
Page Number: 52
Language: German
Content Information
Institutes: Fakultät Medien (M) (since 22.04.2021)
Collections of the Offenburg University: Abschlussarbeiten / Bachelor-Studiengänge / MI
DDC classes: 000 General works, computer science, information science
GND Keyword: Bildqualität; Künstliche Intelligenz
Tag: AI Image Generation; Generative AI Inference; Image Diffusion; Inference
Formal Information
Open Access: Closed 
Licence (German): Creative Commons - CC BY-NC - Attribution - NonCommercial 4.0 International