
Multimodal Augmentation of Surfaces Using Conductive 3D Printing

 
Caio Brito, João Marcelo Teixeira, Gutenberg Barros, Veronica Teichrieb, Walter Correia

Figure 1: Registered 3D printing over the drawing placed on the printer's bed.

Figure 2: Connections between the content printed with conductive filament and the Makey Makey board for audio feedback.

Figure 3: User sensing the assistive content and triggering the audio associated with the elements being touched.

Accessible tactile pictures (ATPs) are tactile representations that convey different kinds of messages and present information through the sense of touch. Traditional approaches use contours and patterns, which create distinct, recognizable shapes and enable separate objects to be identified. This approach has limitations, however. First, the success rate for recognizing pictures by touch is much lower than it would be for vision. Second, some pictures are recognized more often than others. Third, there is variation from individual to individual: while some blind people recognize many images, others recognize few. Auditory support can mitigate all of these issues, even eliminating the need for sighted assistance.

[Fusco and Morash 2015] propose a tactile graphics helper, a mobile system based on visual finger tracking. QR codes are used as triggers for the content to be read: when users bring a finger close to one of the markers, they hear a description of the corresponding visual element. The authors also report that tactile graphics usually take too much time to produce, due to the lack of available assistive content production processes.

In this work, we propose a pipeline that uses computer vision techniques to augment existing surfaces through touch and sound [Teixeira et al. 2016]. Computer vision is adopted in the content preparation phase, speeding up the elaboration of assistive content. Conductive material is deposited over the image, and sound feedback is provided by mapping printed elements to keyboard keys. We show that infant books can be easily transformed into assistive content for visually impaired children using the proposed technology.

The proposed pipeline makes use of four distinct components: a computer, a 3D printer (loaded with conductive filament), a webcam attached to it, and a Makey Makey board. The latter is used to provide audio responses based on what the user touches. The fabrication process starts with the webcam capturing an existing image placed over the 3D printer's bed. The tool was developed with the OpenCV library (using the C++ language) and receives the captured image as input. Since the camera-printer system was previously calibrated, the tool establishes a correspondence between both coordinate systems. The image is then rectified, segmented, and has its contours detected. Each contour defines an interaction zone and maps the touch to a specific sound. In order to optimize the G-code generation, contours are simplified using a polygonal approximation algorithm. Both the image segmentation and the polygonal approximation can be manually adjusted using sliders in the developed tool. The generated G-code uses only G0 and G1 instructions, which correspond to extruder movement and extruder movement with material deposition, respectively. After being sent to the 3D printer, the instructions allow the printing process to be registered with the drawing still placed on the printer (Figure 1). Once printed, each contour is connected to an end point on the Makey Makey board and associated with a given sound (Figures 2 and 3).
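As a rough illustration only, the sketch below follows the same content-preparation sequence (segmentation, contour detection, polygonal approximation, G0/G1 emission) with OpenCV in C++. File names, the threshold, and the approximation tolerance are placeholders, coordinates are left in image pixels rather than printer millimetres, and extrusion (E) values and feed rates are omitted.

#include <opencv2/opencv.hpp>
#include <fstream>
#include <vector>

int main() {
    // Hypothetical input: an already rectified photo of the drawing on the bed.
    cv::Mat img = cv::imread("drawing.png", cv::IMREAD_GRAYSCALE);

    // Segmentation: invert so dark pen strokes become the white foreground
    // traced by findContours (the threshold would come from the tool's slider).
    cv::Mat bin;
    cv::threshold(img, bin, 128, 255, cv::THRESH_BINARY_INV);

    // Each detected contour becomes one interaction zone.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::ofstream gcode("drawing.gcode");
    for (const auto& c : contours) {
        // Polygonal approximation keeps the generated G-code short.
        std::vector<cv::Point> poly;
        cv::approxPolyDP(c, poly, 2.0, true);
        if (poly.size() < 2) continue;

        // G0: move without extruding; G1: move while depositing conductive filament.
        gcode << "G0 X" << poly[0].x << " Y" << poly[0].y << "\n";
        for (size_t i = 1; i < poly.size(); ++i)
            gcode << "G1 X" << poly[i].x << " Y" << poly[i].y << "\n";
        gcode << "G1 X" << poly[0].x << " Y" << poly[0].y << "\n"; // close the contour
    }
    return 0;
}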

We are currently testing the system with visually impaired users in order to verify its potential. By printing on both sides of the surface, we will also be able to automatically construct the conductive channel from the contour to the Makey Makey board connector on the back side of the paper. An in-depth analysis of conductive filament adherence is also being performed. We plan to test different base materials, including paper, nonwoven fabric, acetate, nylon, and PVC, so we can determine which existing content can be adapted by the proposed pipeline.

References

FUSCO, G., AND MORASH, V. S. 2015. The tactile graphics helper: Providing audio clarification for tactile graphics using machine vision. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, ACM, 97–106.

TEIXEIRA, J., BARROS, G., TEICHRIEB, V., AND CORREIA, W. 2016. 3d printing as a means for augmenting existing surfaces. Manuscript submitted for publication (18th Symposium on Virtual and Augmented Reality).

Video

 

 


3D Printing as a Means for Augmenting Existing Surfaces (previous work)

 
João Marcelo Teixeira, Gutenberg Barros, Veronica Teichrieb, Walter Correia

Introduction

A 3D printer has the power of teleportation: it is possible to scan a piece and reconstruct it several thousand kilometers away. But the pieces do not need to be three-dimensional. Two-dimensional drawings are among the most common ways of transmitting information, and the inaccuracy of hand drawings and sketches carries the potential of chance occurrences that can be perceived and transformed into new realities in the alternative-generation phase of problem solving. This research addresses that problem: reproducing, by printing, the exact shapes and "errors" of the lines is a fundamental requirement to preserve that potential, and it also allows other types of drawings to be used. In order to make this happen, the work proposed in this paper is founded on two pillars: computer vision [7] and 3D printing [10].

We tackle one of the central problems of computer vision, recognition, and through it we are able to understand the environment and use that information to guide the printing process. 3D printing technology has gained considerable interest due to the proliferation of inexpensive devices as well as the open source software that drives them [10]. We propose an alternative approach to design for 3D printing, which is one of the most important challenges in this area. Instead of using traditional CAD software, our system takes as input 2D drawings that can be made by hand on a piece of paper. This way we are able to combine paper prototyping through drawings with 3D printing capabilities, contributing to the 3D printing area.

To the best of the authors' knowledge, no system has been reported in which a camera or image sensor is attached to a 3D printer so that registered 3D printing can be performed. The systems found are only concerned with transforming 2D content into 3D models for later printing, not with printing in a registered way.

This work proposes a modification in the way 3D printing content is both modeled and printed. The input comes from 2D drawings, and the printing occurs over the same drawing used as input. This way, the original paper containing the drawing is "augmented" by the 3D printing material, since there is a registration between the real and 3D printing coordinates. By attaching a webcam to a 3D printer, we make the entire system understand the drawing by seeing it and follow its contours during the printing process. We suggest distinct application scenarios that can benefit from this promising technology, ranging from architecture professionals to visually impaired people.

Related Work

According to Gebhardt [5], Rapid Prototyping describes a set of technologies used to accelerate and automate the production of physical objects. This approach emerged in the 1980s as a way to generate models and parts for industrial prototypes by material addition. Nowadays, due to its capability of producing complex parts, its use is much more general, ranging from high-precision models to artistic objects.

Rapid Prototyping is based on three main steps: 3D model design, object slicing, and the 3D printing itself. One of the main uses of 3D reconstruction is the automatic generation of 3D representations of objects that are difficult to model, speeding up the model generation process for graphics applications [15]. The output of a 3D reconstruction pipeline can ideally replace the first step of rapid prototyping. Instead of using multiple views of an object to acquire its 3D information, single-image-based approaches can also be used. Different methods for turning 2D drawings into 3D objects are discussed in [9].

3D printing results can be applied to different scenarios [10]. In this work, we focus on enhancing the sense of spatial information through touch, which can lead to many solutions for visually impaired users [11]. In [14], a sensemaking platform for the blind, named Linespace, is proposed. A tactile display is designed so that visually impaired people can analyze spatial content. Information is placed over a big blue board.

The difference between Linespace and any projective augmented reality solution [2] is how information is placed in the real environment. In the former, there is no consequence to drawing a map at different positions on the blue board. In projective augmented reality, content is shown registered to real landmarks, so a map should be placed superimposed on geometry that fits parts of that map. In order to perform such registration between virtual and real information, a calibration procedure is carried out to find a transformation between the different coordinate systems. The work in [13] illustrates the correct mapping between real and virtual data, using a webcam and a projector to place content over tangible squares.

If the definition of augmented reality is relaxed, it is possible to draw a parallel between projective augmented reality and the idea proposed in this work. Instead of using projectors to place information, this is done by the 3D printer: content is printed following what the camera sees, and registration occurs accordingly.

3D Printing

The concept of 3D printing, as it is known today, has its origins in technologies for layered material addition used for topographic models around 1890. Since then, it has been influenced by improvements in several fields, such as materials and automation. In the 1970s, the first processes similar to those currently used began to appear, but only when computers came in to assist and automate prototyping, in the early 1980s, did it earn the title of 3D printing or rapid prototyping (at least as those terms are currently used) [17]. In 1987, 3D Systems launched the StereoLithography Apparatus, the first commercial equipment for rapid prototyping.

That technology, resin cured by computer-guided lasers, still exists and is widely used, but several new technologies have emerged since then. Today there are processes based on particles, on liquids, and on solid layers of material (plates). Among these technologies, FFF (Fused Filament Fabrication) is one of the cheapest and has the fewest requirements for use. It consists of constructing the part through deposition of fused material, layer by layer. In each layer, the printer head (nozzle) traverses the whole path, applying the filament (material heated until it becomes doughy) on the desired points. When the layer is finished, the platform is lowered and the nozzle starts a new layer over the now-cooled previous one.

As advantages, besides low implementation and maintenance costs, it does not need a temperature-controlled environment, the material and process have no toxic stages, the materials are relatively cheap, it operates with low noise, it can reach 0.1 mm of accuracy, and it can construct 1 mm thick structures with good physical resistance. However, the process takes longer on average to construct a piece and is limited to small volumes [8]. Given these characteristics, an FFF printer is ideal to have at home or in a small office. This is the technology chosen for this work.

Augmenting Existing Surfaces

In this section, we describe a step-by-step solution that produces 3D printed objects from 2D drawings. The distinguishing feature of the proposed solution is that the 3D printing process occurs registered to the 2D drawing placed on the table, in such a way that the image contours are used as guidance for the printing. In other words, it works as if the 3D printer were able to see the drawing placed on it and then cover the drawing with extruded material.

The setup used in the solution is shown in Figure 1 and makes use of three hardware components:

- 3D Printer: We currently use the Sethi3D BB 1.75mm, which is one of the biggest 3D printers produced in Brazil [4]. It can print a volume of 400mm x 400mm x 400mm and uses PLA (polylactic acid) as printing material in an FFF print process. The 3D printer is connected to the computer through a serial interface over USB.

- Webcam: A Microsoft Lifecam Studio webcam is attached to the 3D printer and connected to the computer via USB, as shown in Figure 3. It was chosen mainly due to its 1080p resolution and focus control capabilities.

- Computer: An Intel Core i7 3270QM laptop with 16GB of RAM running Microsoft Windows 8.1 was used. Since the computational load of the developed application is low (real-time processing is not a requirement for the solution), any USB-enabled computer is supported.

Figure 1: Solution setup.

The proposed solution can be divided into the steps depicted in Figure 2; each of them is described below. It was developed using the C++ language, Visual Studio Professional 2013 as IDE, the OpenCV library [3], and the same machine configuration previously described.

Figure 2: Follower’s flow diagram.

Figure 3: View of the webcam attached to the 3D printer. In this configuration, about 86.5% of the printing table can be seen by the camera.

The first step, image capturing, is responsible for providing the input to the solution. Instead of adding an image capturing module to the developed application, we allow the user to adopt any webcam capture software available on the computer. In our tests, the Windows Camera application was used to capture the 1080p image of the drawing placed on the 3D printer table. It is important to notice that, before taking the photo, the 3D printer extruder must be placed in its "home" position, which moves the table to the most visible position for the camera (as shown in Figure 1), reaching the field of view shown in Figure 3.

For simplicity, we consider only the 200mm x 200mm central region of the table for printing. Since the camera attached to the 3D printer is fixed, the calibration between camera and 3D printer must be done only once.

In order to find a relationship between both coordinate systems, the coordinates of four known points are used, along with OpenCV's findHomography function [3]. We used the G-code [12] in Figure 4 to print the four reference points on the table (corners of a 200mm x 200mm square centered on the table), shown in Figure 5. Then, it is only necessary to find, in the camera image, the coordinates of the points (100,100), (300,100), (300,300), and (100,300), all given in 3D printer coordinates.

The result of the calibration process is a transformation matrix $H$ (homography), capable of transforming a point $C_i$ in the camera coordinate system into a point $P_i$ in the 3D printer coordinate system. In homogeneous coordinates, this relation is $s\,(x_{P_i},\,y_{P_i},\,1)^T = H\,(x_{C_i},\,y_{C_i},\,1)^T$, where $s$ is an arbitrary scale factor. Since the paper is placed over the 3D printer's bed, the Z coordinate of the point in the 3D printer's coordinate system is always 0 (zero); this way, a homography is enough to represent the mapping between the two planes.
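A minimal sketch of this one-time calibration step, assuming the four reference dots have already been located in the webcam image (the pixel coordinates below are illustrative placeholders, not measured values):

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat calibrateCameraToPrinter() {
    // Where the four printed reference dots appear in the webcam image (pixels);
    // these values are placeholders and must be measured on the captured photo.
    std::vector<cv::Point2f> cameraPts  = { {412, 188}, {1530, 196}, {1512, 902}, {428, 910} };
    // The same dots in 3D printer coordinates (mm), from the calibration G-code.
    std::vector<cv::Point2f> printerPts = { {100, 100}, {300, 100}, {300, 300}, {100, 300} };

    // H maps camera pixels to printer millimetres; Z is always 0 on the bed.
    return cv::findHomography(cameraPts, printerPts);
}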



Figure 4: G-code used in the calibration between 3D printer and webcam.

Figure 5: Correspondences between camera and 3D printer reference points for calibration. Coordinates in red (below) are given in webcam's coordinate system, while those in green (above) are given in 3D printer's coordinate system.

After determining the homography transform that will be used to correct the perspective distortion, it can be applied to the input image, yielding a square image as a result, as shown in Figure 6.
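One way to perform this correction is OpenCV's warpPerspective with a scaled version of H; the sampling resolution and the shift that maps the 100-300 mm region to the image origin are assumptions made for illustration:

#include <opencv2/opencv.hpp>

cv::Mat rectifyDrawing(const cv::Mat& photo, const cv::Mat& H) {
    // Sample the 200 mm x 200 mm region (printer coords 100-300 mm) at 5 px/mm,
    // producing a 1000 x 1000 square image aligned with the printer axes.
    const double pxPerMm = 5.0;
    cv::Mat S = (cv::Mat_<double>(3, 3) << pxPerMm, 0, -100 * pxPerMm,
                                           0, pxPerMm, -100 * pxPerMm,
                                           0, 0, 1);
    cv::Mat M = S * H;          // camera pixels -> rectified image pixels
    cv::Mat rectified;
    cv::warpPerspective(photo, rectified, M, cv::Size(1000, 1000));
    return rectified;
}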

Figure 6: Perspective correction result.

The corrected square image is then binarized using OpenCV's threshold function [3], as described by the equation below. $thresh$ represents the thresholding value used and $v$ is the gray value of the pixel, in the $[0,255]$ interval. Since the input image comes from a real scenario, it is difficult to determine the best threshold parameter to use. For this reason, the application allows the user to select the threshold value that best isolates the drawing. Figure 7 shows three different results of the input image thresholded with varying parameters.

\begin{equation} bin(v) = \left\{ \begin{array}{rl} 255 &\mbox{ if $v > thresh$} \\ 0 &\mbox{ otherwise} \end{array} \right. \label{eq3} \end{equation}
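A sketch of how such a user-adjustable threshold can be exposed with an OpenCV slider; the window and trackbar names are illustrative and not those of the actual tool:

#include <opencv2/opencv.hpp>

cv::Mat g_gray;      // rectified drawing, already converted to grayscale
int g_thresh = 128;  // slider value in [0, 255]

static void onThreshold(int, void*) {
    cv::Mat bin;
    // Matches the equation above: 255 where v > thresh, 0 otherwise.
    cv::threshold(g_gray, bin, g_thresh, 255, cv::THRESH_BINARY);
    cv::imshow("binarized", bin);
}

void runThresholdUI(const cv::Mat& gray) {
    g_gray = gray;
    cv::namedWindow("binarized");
    cv::createTrackbar("thresh", "binarized", &g_thresh, 255, onThreshold);
    onThreshold(0, nullptr);
    cv::waitKey(0);  // the user moves the slider until the drawing is isolated
}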

Figure 7: Input image thresholding. A) Threshold = 64; B) Threshold = 128; C) Threshold = 170.

Next, the image contours are detected in the contour detection step using OpenCV's findContours function [3]. This function returns a list containing all contours detected in the image. A contour is a list of 2D points (X and Y coordinates) in a specific order, such that by connecting all points in the list (including the last point of the contour to the first one, thereby closing the contour), the original contour is retrieved.
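As a small illustration of this representation only (not a step of the pipeline), the perimeter of such a closed contour can be computed by walking the point list and wrapping from the last point back to the first; OpenCV's arcLength with closed=true returns the same value:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Length of a closed contour as returned by findContours: consecutive points
// are connected, and the last point links back to the first one.
double contourPerimeter(const std::vector<cv::Point>& contour) {
    double length = 0.0;
    for (size_t i = 0; i < contour.size(); ++i) {
        const cv::Point& a = contour[i];
        const cv::Point& b = contour[(i + 1) % contour.size()]; // wraps to close the loop
        length += std::hypot(double(b.x - a.x), double(b.y - a.y));
    }
    return length;
}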

The first step in contour detection is to identify a valid pixel (in our case, a black pixel) and then verify whether its neighbors have the same color. To do that, two approaches can be applied: 4-connectivity and 8-connectivity [16].

In the 4-connectivity approach, only four neighbor positions are searched: the north, south, east, and west positions. The pixel being analyzed is positioned at the origin of the presented coordinate system, as shown on the left of Figure 8.

Figure 8: Different connectivity approaches that can be used for contour tracing: on the left, 4-connectivity; on the right, 8-connectivity.

In the 8-connectivity approach, eight neighbor pixels are analyzed, considering the four from the previous approach plus the diagonal neighbors as well. The right part of Figure 8 illustrates the 8-connectivity approach. OpenCV's findContours function is based on the 8-connectivity approach, since it is more complete and better represents real scenarios.
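A sketch of the neighbor test underlying this kind of contour tracing, using the 8-connectivity offsets described above; it mirrors the idea rather than OpenCV's internal implementation:

#include <opencv2/opencv.hpp>

// Returns true if any 8-connected neighbor of p has the same color as p
// in the binarized image; dropping the last four offsets gives 4-connectivity.
bool hasSameColorNeighbor8(const cv::Mat& bin, cv::Point p) {
    static const cv::Point off[8] = { {1, 0}, {-1, 0}, {0, 1}, {0, -1},
                                      {1, 1}, {1, -1}, {-1, 1}, {-1, -1} };
    for (const cv::Point& d : off) {
        cv::Point q = p + d;
        if (q.x < 0 || q.y < 0 || q.x >= bin.cols || q.y >= bin.rows) continue;
        if (bin.at<uchar>(q) == bin.at<uchar>(p)) return true;
    }
    return false;
}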

Each pair of consecutive contour points results in a line segment that could be directly translated into G-code. In order to reduce the amount of G-code generated, we perform a polygonal approximation step using OpenCV's approxPolyDP function [3]. This function also makes use of a threshold that controls how strong the approximation is. As with the binarization procedure, we allow the user to select the approximation most adequate for the 2D drawing. Figure 10 shows two different results of the approximated polygons with varying parameters.

The polygonal approximation works iteratively, by constructing line segments from the polygon points of the contours found and calculating the distance from these segments to all other polygon points. Consider a segment $S_1$ composed of points $P_1$ and $P_2$. For each point $P_i$ belonging to the polygon, if the distance from $S_1$ to $P_i$ is greater than a certain $pat$ (polygonal approximation tolerance), two new segments $S_2$ and $S_3$ are created, in which $S_2$ is composed of points $P_1$ and $P_i$ and $S_3$ of points $P_i$ and $P_2$. Then, $P_i$ is added to the result vector containing all points of the approximation. In case the distance is shorter than $pat$, the point $P_i$ is discarded. Figure 9 illustrates these steps.

Figure 9: Polygonal approximation steps.

More precisely, the polygonal approximation algorithm connects the points at the extremities with a straight line as a rough initial approximation of the polygon. The complete polygon is then determined by computing the distances of all intermediate vertices to the segment being analyzed. When all distances are shorter than $pat$, the algorithm stops and the resulting point vector is composed of the points used in all segment traces, as shown in Figure 9.
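In the tool, this step reduces to a call to approxPolyDP per contour, roughly as sketched below, with $pat$ supplied by the user through a slider; the function name and structure are illustrative:

#include <opencv2/opencv.hpp>
#include <vector>

// Simplifies every detected contour with the user-chosen tolerance 'pat'.
std::vector<std::vector<cv::Point>> simplifyContours(
        const std::vector<std::vector<cv::Point>>& contours, double pat) {
    std::vector<std::vector<cv::Point>> polys;
    for (const auto& c : contours) {
        std::vector<cv::Point> poly;
        // Douglas-Peucker: vertices closer than 'pat' to the current segment are dropped.
        cv::approxPolyDP(c, poly, pat, /*closed=*/true);
        polys.push_back(poly);
    }
    return polys;
}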

Figure 10: Results for the polygonal approximation with different polygonal approximation tolerances. The simplification is more adequate for drawings composed of a large number of straight lines and reduces the number of G-code commands.

After selecting the best-suited parameters for both binarization and polygonal approximation, the G-code is generated in the next step. The code is formed by appending the printing information corresponding to the 2D drawing to a specific header, similar to the one in Figure 4. By default, three layers of 0.1mm height are printed over the drawing, but this value can be modified to increase the height of the extruded drawing. For each approximated polygon, a list of 2D points is read. We use the G0 instruction to move the extruder to the first polygon point, and for each subsequent pair of points, the G1 instruction deposits material over the approximated line segments.
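A hedged sketch of this G-code emission step is given below: three 0.1 mm layers are traced over each simplified polygon. The header string and the absence of extrusion (E) and feed rate (F) parameters are simplifications; a real print additionally needs those values, as well as polygon coordinates already converted to printer millimetres.

#include <opencv2/core.hpp>
#include <fstream>
#include <string>
#include <vector>

void writeGcode(const std::vector<std::vector<cv::Point2f>>& polysMm,
                const std::string& path) {
    std::ofstream out(path);
    out << "G21\nG90\nG28\n";            // mm units, absolute moves, home (placeholder header)
    const int layers = 3;                // three layers of 0.1 mm by default
    const double layerHeight = 0.1;
    for (int layer = 1; layer <= layers; ++layer) {
        double z = layer * layerHeight;
        for (const auto& poly : polysMm) {
            if (poly.size() < 2) continue;
            // G0: travel to the first vertex of the polygon without extruding.
            out << "G0 X" << poly[0].x << " Y" << poly[0].y << " Z" << z << "\n";
            // G1: deposit material along each approximated segment, then close the loop.
            for (size_t i = 1; i < poly.size(); ++i)
                out << "G1 X" << poly[i].x << " Y" << poly[i].y << "\n";
            out << "G1 X" << poly[0].x << " Y" << poly[0].y << "\n";
        }
    }
}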

In order to check the correctness of the produced printing instructions, the generated G-code can be visualized in the Repetier-Host application [6], as shown in Figure 11. After correcting or configuring the 3D printer parameters, the code is sent to the printer through the same application. The final printed result is shown in Figure 12.


Figure 11: Repetier-Host application used for visualizing the generated printing instructions.

Figure 12: Final printing result obtained by following the complete steps proposed.

Application Scenarios

In order to illustrate some of the application scenarios related to the proposed solution, three different prototypes are discussed. The first two are related to the accessibility context of visually impaired people while the last provides an interesting tool for architecture professionals.

Blind individuals rely on their sense of touch for pattern perception, much as sighted people depend on vision. If a blind person has extra training in the use of touch for tasks such as Braille or spatial orientation, then we might expect increased skill as a consequence [11]. The research area concerning pictures and pattern perception by blind people has been controversial, with some researchers emphasizing difficulties in the production and interpretation of tangible pictures. Many researchers have pointed out the difficulties involved in translating 3D information to a 2D display [1], while others have noted that it may be easy to name visual pictures.

The first prototype, shown in Figure 13, uses a common 2D drawing made by hand. The same technique can be applied to art pieces, for example. In this case, the drawing contours are highlighted using the 3D printing material, so that the user can feel the boundaries and shapes of the image.

Figure 13: 3D printing result regarding the art application scenario.

The second prototype transforms already existing printed material, for example an infant book, into accessible media for blind children, as shown in Figure 14. The page of the book is placed on the 3D printer table, and the contours of the dog are selected to be highlighted. It is important to notice that this selection had to be done manually, directly on the input image, since no editing tool is available yet. It would also be possible to print on the page the Braille corresponding to the text. That way, the story could be read and the visual representations could be felt.

Figure 14: 3D printing result regarding the children book scenario.

The third prototype makes use of the blueprint of a building and generates the corresponding 3D model, as shown in Figure 15. In this case, the extrusion height was changed to 2.5 cm, since the 3D print should be high enough to represent the walls. In the other prototypes, we used a height of 0.3 mm, obtained by overlaying three 0.1 mm layers, which was enough to give the tactile impression.

Figure 15: Architecture scenario 3D printing result.

Table 1 provides more details about the prototypes regarding drawing area, the amount of material used to print and estimated printing time for each one of them.

Table 1: Details regarding the printed prototypes.

Conclusion

In this paper, a system capable of performing registered 3D printing was proposed. The system works in a similar way to projective augmented reality, in which the real world reference must be first acquired (through markers or other features) and then the light is projected accordingly. Instead of light, printing material (PLA) is used.

Despite the high resolution webcam used (1080p), we believe that lower resolution cameras would also be able to provide enough precision for the 3D printer to correctly follow the contours of the drawings used as input.

One of the advantages of the proposed system is that it can be applied to any 3D printer model that works with the FFF print process (one of the least expensive on the market), since only basic G-code operations are used in the printer commands. For different 3D printer models, the calibration between webcam and 3D printer must be redone.

Different opportunities arise from this work, as shown in the application scenarios. Among them, visually impaired people are the ones who can benefit the most from the proposed system.

From this point on, different work directions can be pursued. One of them is to validate the listed application scenarios with users to perceive the real benefit obtained from the use of the system. The complexity of the printing can be explored as well, providing filling along with the contour printing.

In order to create products based on the proposed solution, we must study material properties of the paper (or other material) to be used as the printing base. Depending on the target application, the adherence of the medium must be carefully chosen so that the print does not detach from it, even in situations of intense interaction (such as the children's book, with which kids are expected to have intense and frequent contact).

More complex operations should be supported, besides simple extrusion of the contours. For example, printing height can be correlated to the color of the drawing.

It would be interesting to further support the use of different materials in the printing process. Finally, the developed tool should be improved to present a more interactive interface, allowing the user to edit the captured drawing, for example.

Acknowledgements

The authors wish to thank Vinícius Gomes for allowing the use of his character illustration for experiments and Leila Barros for the help with the children book illustration and the architectural drawing, both also used in experimentation. This work was supported in part by a grant from [OMITTED DUE TO BLIND REVIEW].

References

[1] E. Axel and N. Levent. Art Beyond Sight: A Resource Guide to Art, Creativity, and Visual Impairment. AFB Press, 2003.

[2] O. Bimber and R. Raskar. Spatial Augmented Reality: Merging Real and Virtual Worlds. A. K. Peters, Ltd., Natick, MA, USA, 2005.

[3] G. Bradski. The OpenCV Library. Dr. Dobb’s Journal of Software Tools, 2000.

[4] S. I. C. P. Eletrônicos. Impressora Sethi3D BB - 1.75mm, 2016.

[5] A. Gebhardt. Rapid Prototyping. Hanser, Munich, 2003.

[6] Hot-World GmbH & Co. KG. Repetier software, 2016.

[7] J. Malik, P. Arbeláez, J. Carreira, K. Fragkiadaki, R. Girshick, G. Gkioxari, S. Gupta, B. Hariharan, A. Kar, and S. Tulsiani. The three R's of computer vision: Recognition, reconstruction and reorganization. Pattern Recognition Letters, 2016.

[8] C. H. P. Mello and C. E. S. C. Silva. Comparação de três diferentes tecnologias de prototipagem rápida em relação a critérios de custo e tempo. In Anais do XXVI Encontro Nacional de Engenharia de Produção, 2006.

[9] J. Micallef. Beginning Design for 3D Printing. Apress, Berkely, CA, USA, 1st edition, 2015.

[10] W. Oropallo and L. A. Piegl. Ten challenges in 3d printing. Engineering with Computers, 32(1):135–148, 2015.

[11] T. Prescott, E. Ahissar, and E. Izhikevich. Scholarpedia of Touch. Scholarpedia. Atlantis Press, 2015.

[12] RepRap. G-code - reprap wiki, 2016.

[13] R. A. Roberto, D. Q. de Freitas, F. P. M. Simões, and V. Teichrieb. A dynamic blocks platform based on projective augmented reality and tangible interfaces for educational activities. In Proceedings of the 2013 XV Symposium on Virtual and Augmented Reality, SVR ’13, pages 1–9, Washington, DC, USA, 2013. IEEE Computer Society.

[14] S. Swaminathan, T. Roumen, R. Kovacs, D. Stangl, S. Mueller, and P. Baudisch. Linespace: A sensemaking platform for the blind. In Proceedings of the 34th Annual ACM Conference on Human Factors in Computing Systems, CHI '16, 2016.

[15] F. Simões, M. Almeida, M. Pinheiro, R. D. Anjos, A. D. Santos, R. Roberto, V. Teichrieb, C. Suetsugo, and A. Pelinson. Challenges in 3d reconstruction from images for difficult large-scale objects: A study on the modeling of electrical substations. In Virtual and Augmented Reality (SVR), 2012 14th Symposium on, pages 74–83, May 2012.

[16] M. Sonka, V. Hlavac, and R. Boyle. Image Processing, Analysis, and Machine Vision. Thomson-Engineering, 2007.

[17] N. Volpato. Prototipagem rápida: tecnologias e aplicações. Edgard Blücher, 2007.

Video