Alan Brunton and Lubna Abu Rmaileh
ACM Transactions on Graphics (Proc. SIGGRAPH) Volume 40 Issue 4, August 2021
We propose displaced signed distance fields, an implicit shape representation to accurately, efficiently and robustly 3D-print finely detailed and smoothly curved surfaces at native device resolution. As the resolution and accuracy of 3D printers increase, accurate reproduction of such surfaces becomes increasingly realizable from a hardware perspective. However, representing such surfaces with polygonal meshes requires high polygon counts, resulting in excessive storage, transmission and processing costs. These costs increase with print size, and can become exorbitant for large prints. Our implicit formulation simultaneously allows the augmentation of low-polygon meshes with compact meso-scale topographic information, such as displacement maps, and the realization of curved polygons, while leveraging efficient, streaming-compatible, discrete voxel-wise algorithms. Critical for this is careful treatment of the input primitives, their voxel approximation and the displacement to the true surface. We further propose a robust sign estimation to allow for incomplete, non-manifold input, whether human-made for on-screen rendering or directly out of a scanning pipeline. Our framework is efficient in terms of both time and space. The running time is independent of the number of input polygons and the amount of displacement, and is constant per voxel. The storage costs grow sub-linearly with the number of voxels, making our approach suitable for large prints. We evaluate our approach for efficiency and robustness, and show its advantages over standard techniques.
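The voxel-wise sign test at the heart of such a representation can be sketched as follows. This is a minimal illustration, not the paper's algorithm: a sphere stands in for the base mesh's signed distance field, and an analytic bump pattern stands in for a meso-scale displacement map.

```python
import math

def sphere_sdf(x, y, z, radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return math.sqrt(x * x + y * y + z * z) - radius

def displacement(x, y, z, amplitude=0.1, frequency=4.0):
    """Toy meso-scale displacement (a bump pattern). In the paper this
    would come from a displacement map on the input mesh; here it is a
    hypothetical analytic stand-in."""
    return amplitude * math.sin(frequency * x) * math.sin(frequency * y)

def displaced_sdf(x, y, z):
    """Offset the base distance by the displacement. For small
    displacements this approximates the distance to the displaced
    (true) surface."""
    return sphere_sdf(x, y, z) - displacement(x, y, z)

def voxel_inside(x, y, z):
    """Voxel-wise sign test: fill the voxel with material if the
    displaced signed distance is negative (inside the solid)."""
    return displaced_sdf(x, y, z) < 0.0
```

Because each voxel is decided independently from a constant amount of local information, such a test is compatible with streaming, slice-by-slice evaluation.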
Danwu Chen, Philipp Urban
Optics Express Vol. 29, Issue 2, pp. 615-631, January 2021
Multi-material 3D printers are able to create material arrangements possessing various optical properties. To reproduce these properties, an optical printer model that accurately predicts optical properties from the printer’s control values (tonals) is crucial. We present two deep-learning-based models and training strategies for optically characterizing 3D printers that achieve high accuracy with only a moderate number of required training samples. The first is a Pure Deep Learning (PDL) model, essentially a black box without any physical grounding; the second is a Deep-Learning-Linearized Cellular Neugebauer (DLLCN) model that uses deep learning to multidimensionally linearize the tonal-value space of a cellular Neugebauer model. We test the models on two six-material polyjetting 3D printers to predict both reflectances and translucency. Results show that both models can achieve accuracies sufficient for most applications with far fewer training prints than a regular cellular Neugebauer model.
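For context, the underlying Neugebauer prediction can be sketched for the simplest case: two inks at a single wavelength, with Demichel area coverages. The primary reflectance values below are made up for illustration; the DLLCN idea is then to pass the tonals through a learned linearization before this mixing step.

```python
def demichel_weights(a1, a2):
    """Demichel area coverages for two inks with tonals a1, a2 in [0, 1]:
    bare substrate, ink 1 only, ink 2 only, and the overprint."""
    return {
        "w":   (1 - a1) * (1 - a2),
        "i1":  a1 * (1 - a2),
        "i2":  (1 - a1) * a2,
        "i12": a1 * a2,
    }

# Hypothetical primary reflectances at one wavelength (illustrative values).
PRIMARIES = {"w": 0.9, "i1": 0.3, "i2": 0.5, "i12": 0.2}

def neugebauer_reflectance(a1, a2):
    """Neugebauer prediction: coverage-weighted sum of primary reflectances."""
    w = demichel_weights(a1, a2)
    return sum(w[k] * PRIMARIES[k] for k in PRIMARIES)
```

A cellular variant subdivides the tonal space into cells with measured corner primaries; the DLLCN model instead learns a nonlinear remapping of the tonals so that a coarse cellular grid suffices.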
Philipp Urban, Tejas Madan Tanksale, Alan Brunton, Bui Minh Vu, Shigeki Nakauchi
ACM Transactions on Graphics, Volume 38, Issue 3, Article 21, May 2019
Advances in multimaterial 3D printing have the potential to reproduce various visual appearance attributes of an object in addition to its shape. Since many existing 3D file formats encode color and translucency by RGBA textures mapped to 3D shapes, RGBA information is particularly important for practical applications. In contrast to color (encoded by RGB), which is specified by the object’s reflectance, selected viewing conditions and a standard observer, translucency (encoded by A) is not linked to any measurable physical or perceptual quantity. Thus, reproducing translucency encoded by A is open for interpretation.
In this paper, we propose a rigorous definition for A suitable for use in graphical 3D printing, which is independent of the 3D printing hardware and software, and which links both optical material properties and perceptual uniformity for human observers. By deriving our definition from the absorption and scattering coefficients of virtual homogeneous reference materials with an isotropic phase function, we achieve two important properties. First, a simple adjustment of A is possible, which preserves the translucency appearance if an object is re-scaled for printing. Second, determining the value of A for a real (potentially non-homogeneous) material can be achieved by minimizing a distance function between light transport measurements of this material and simulated measurements of the reference materials. Such measurements can be conducted by commercial spectrophotometers used in graphic arts.
Finally, we conduct visual experiments employing the method of constant stimuli, and derive from them an embedding of A into a nearly perceptually uniform scale of translucency for the reference materials.
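The assignment step described above, choosing A by minimizing a distance between measurements of the real material and simulated measurements of the reference materials, can be sketched as a nearest-curve search. The candidate A values, simulated curves, and measurement vector below are hypothetical placeholders, not data from the paper.

```python
def assign_alpha(measured, reference_curves):
    """Pick the A value whose reference-material simulation is closest
    (squared Euclidean distance) to the measurement of the real material.

    measured:         light transport measurement vector of the material
    reference_curves: dict mapping candidate A values to simulated
                      measurement vectors of the homogeneous references
    Both inputs are illustrative stand-ins for real measurement data.
    """
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(reference_curves, key=lambda A: dist(measured, reference_curves[A]))

# Usage with made-up two-sample "measurements":
refs = {0.0: [1.0, 0.9], 0.5: [0.6, 0.5], 1.0: [0.1, 0.05]}
best_A = assign_alpha([0.58, 0.52], refs)  # closest reference curve wins
```

In practice one would minimize over a continuous parameterization of the reference materials rather than a discrete set, but the discrete search conveys the idea.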
Alan Brunton, Can Ates Arikan, Tejas Madan Tanksale, Philipp Urban
ACM Transactions on Graphics (Proc. SIGGRAPH) Volume 37, Issue 4, August 2018
We present an efficient and scalable pipeline for fabricating full-colored objects with spatially-varying translucency from practical and accessible input data via multi-material 3D printing. Observing that the costs associated with BSSRDF measurement and processing are high, that the range of 3D-printable BSSRDFs is severely limited, and that the human visual system relies only on simple high-level cues to perceive translucency, we propose a method based on reproducing perceptual translucency cues. The input to our pipeline is an RGBA signal defined on the surface of an object, making our approach accessible and practical for designers. We propose a framework for extending standard color management and profiling to combined color and translucency management using a gamut correspondence strategy we call opaque relative processing. We present an efficient streaming method to compute voxel-level material arrangements, achieving both realistic reproduction of measured translucent materials and artistic effects involving multiple fully or partially transparent geometries.
Alan Brunton, Can Ates Arikan, Philipp Urban
ACM Transactions on Graphics (TOG) Volume 35 Issue 1, December 2015
Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits. Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing in terms of reproducing complex appearance properties. However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials. In this article, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object. We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing. The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
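The kind of 2D error diffusion the paper transfers to voxel surfaces can be illustrated with the classic Floyd–Steinberg scheme. This is a textbook sketch of the 2D building block, not the paper's voxel-surface traversal.

```python
def floyd_steinberg(image):
    """Classic Floyd-Steinberg error diffusion. `image` is a list of rows
    of grayscale values in [0, 1]; returns a binary halftone of the same
    shape. Quantization error at each pixel is pushed onto unvisited
    neighbors with the standard 7/16, 3/16, 5/16, 1/16 weights."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]       # work on a copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1 if old >= 0.5 else 0  # threshold to the nearest level
            out[y][x] = new
            err = old - new
            # distribute the error to right and lower neighbors
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return out
```

The paper's contribution is a traversal order for voxel surfaces that plays the role of the raster scan above, so that established 2D diffusion kernels carry over to isosurfaces within the printed object.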
C. Altenhofen, T. H. Luu, T. Grasser, M. Dennstädt, J. S. Mueller-Roemer, D. Weber, A. Stork
Solid Freeform Fabrication 2018, August 2018
Bui Minh Vu, Philipp Urban, Tejas Madan Tanksale, Shigeki Nakauchi
IS&T Color and Imaging Conference (CIC) 2016, November 2016
Tejas Madan Tanksale, Philipp Urban
IS&T International Symposium on Electronic Imaging 2016 | Measuring, Modeling, and Reproducing Material Appearance 2016, February 2016
Can Ates Arikan, Alan Brunton, Tejas Madan Tanksale, Philipp Urban
Measuring, Modeling, and Reproducing Material Appearance 2015, March 2015
Master's thesis, TU Darmstadt, February 2017
Determining material arrangements to control high-resolution multi-material 3D printers for reproducing the shape and visual attributes of a 3D model (e.g. spatially-varying color, translucency and gloss) requires large computational effort. Today's resolutions and print tray sizes allow prints with more than 10^12 voxels, each filled with one of the available printing materials (today up to 7 materials can be combined in a single print). Cuttlefish, a 3D printing pipeline, processes the input serially, leading to increased computation times as the number of models grows. Distributed computing is one way of achieving better performance for large computations. In this master's thesis, we developed a distributed version of the Cuttlefish printer driver in which the computational task is distributed amongst multiple nodes in a cluster and the resulting partial outputs are merged to generate the full slices. The architecture supports streaming, which is required to rapidly start the print before the full computation is finished, as Cuttlefish processes the input in small parts and generates chunk-wise output. Finally, we compare the performance of the distributed and non-distributed Cuttlefish versions to better understand the advantages and challenges of distributed computing.
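The chunk-wise distribute-and-merge pattern described above can be sketched as follows. The per-chunk work function is a hypothetical stand-in for Cuttlefish's actual material-arrangement computation, and worker threads stand in for cluster nodes.

```python
from concurrent.futures import ThreadPoolExecutor

def compute_partial_slice(chunk):
    """Hypothetical stand-in for per-node work: assign one of 7 printing
    materials to each voxel index in one chunk of a slice."""
    start, end = chunk
    return [v % 7 for v in range(start, end)]

def compute_slice_distributed(n_voxels, n_workers=4):
    """Split a slice into chunks, process them in parallel, and merge the
    partial outputs in their original order. Ordered merging is what lets
    a pipeline stream finished slices to the printer before the whole
    computation is done."""
    step = (n_voxels + n_workers - 1) // n_workers
    chunks = [(i, min(i + step, n_voxels)) for i in range(0, n_voxels, step)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # Executor.map yields results in submission order, so the merge
        # below reconstructs the slice correctly regardless of which
        # worker finishes first.
        partials = list(pool.map(compute_partial_slice, chunks))
    return [v for part in partials for v in part]
```

In the actual system the chunks would be dispatched to cluster nodes rather than threads, but the ordering and merging concern is the same.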
Tejas Madan Tanksale
Master's thesis, TU Darmstadt, July 2015
Colours perceived by humans are influenced by a large number of factors. The same object may look different under different lighting conditions. This is also true for images captured by a camera sensor. In addition, each measuring device has its own capturing properties; for example, the RGB intensities captured by different cameras differ for the same object under the same lighting conditions. To avoid these variations in the observed colour, it is necessary to know the ground truth of the object's colour data, which is given by its spectral reflectance. In this thesis, we devise a method for rapidly estimating spectral reflectances using a tunable monochromatic light source and a trichromatic camera. The estimation is a two-step process: first, we determine the camera sensitivities; second, we use the estimated sensitivities to calculate the reflectances. Both steps use the same setup, which allows us to use software application programming interfaces (APIs) to obtain reflectances for a large number of targets with high speed and accuracy. To evaluate our method, we employ a spectroradiometer, which can directly measure the spectra of the targets.
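Under a simplified single-channel, noise-free model, the two-step estimation reads as below. The assumption of a perfectly white calibration target and all variable names are illustrative, not the thesis's exact procedure.

```python
def estimate_sensitivities(responses_white, illum_power):
    """Step 1: image a known white target (reflectance assumed 1.0) under
    each monochromatic wavelength of the tunable source. The response at
    a wavelength divided by the light power there gives the camera
    sensitivity at that wavelength."""
    return [c / p for c, p in zip(responses_white, illum_power)]

def estimate_reflectance(responses_target, sensitivities, illum_power):
    """Step 2: with sensitivities known, the target's responses under the
    same monochromatic sweep recover its spectral reflectance, since
    response = sensitivity * power * reflectance at each wavelength."""
    return [c / (s * p)
            for c, s, p in zip(responses_target, sensitivities, illum_power)]
```

Because the light source is monochromatic, each wavelength decouples from the others and no regularized inversion of a broadband system is needed, which is what makes the per-target estimation fast.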
Master's thesis, Hochschule Darmstadt, February 2016
This master's thesis deals with generating toolpaths for 3D printing. The input is a layer-wise voxel representation of the 3D models. For each layer, contour-parallel paths and paths for support and infill material must be computed. The thesis first employs a contour-following algorithm, the "radial sweep" algorithm; however, it fails on interlocking contours. Holes to be printed in the model can give rise to ambiguous paths that the algorithm would ignore, since it only considers one surface of a component and not possible inner surfaces. Dead ends also require finding a new point from which to continue printing. In this thesis, unambiguous and ambiguous paths are therefore first separated. For the ambiguous paths, a graph is built that initially also contains irrelevant information; it is then reduced so that only the most important ambiguous paths remain. This graph and the previously collected unambiguous path information are then used for traversal. The thesis further discusses options for printing at different resolutions in order to increase the level of detail of voxel-based prints. The results of this work are prints comparable to those produced by polygon-based path generation methods.
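A minimal stand-in for the first step of contour-parallel path generation, finding the boundary cells of a voxel layer from which a tracing algorithm such as radial sweep would start, might look like this (illustrative only, not the thesis's algorithm):

```python
def contour_cells(layer):
    """Return the filled cells of a binary voxel layer that touch an
    empty 4-neighbor (or the layer border). These boundary cells are the
    seed set from which contour-parallel toolpaths would be traced; note
    that inner surfaces around holes show up here too, which is exactly
    the ambiguity the thesis has to resolve."""
    h, w = len(layer), len(layer[0])

    def empty(y, x):
        # Outside the layer counts as empty.
        return y < 0 or y >= h or x < 0 or x >= w or not layer[y][x]

    return [(y, x)
            for y in range(h) for x in range(w)
            if layer[y][x] and any(empty(y + dy, x + dx)
                                   for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)))]
```

Ordering these cells into closed, unambiguous printing paths, especially where outer and inner contours meet, is the hard part the thesis addresses with its graph reduction.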