All Photos Tagged MeshLab

Modeled in TopMod. Subdivided in MeshLab to save time — for some reason MeshLab can subdivide in under a second, while TopMod takes five minutes or more. Rendered in Sunflow.
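To give a feel for why subdivision speed matters: each step multiplies the face count by four, so a few recursions explode quickly. This is only a toy midpoint ("1-to-4") split in plain Python, not MeshLab's actual implementation — real Loop subdivision also repositions the vertices to smooth the surface.

```python
# Toy sketch of one midpoint ("1-to-4") subdivision step, the split at the
# heart of Loop-style subdivision. Each triangle becomes four smaller ones,
# so n steps turn one triangle into 4**n.

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(triangles):
    out = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

mesh = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
for _ in range(3):
    mesh = subdivide(mesh)
print(len(mesh))  # 4**3 = 64 triangles from one
```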

Mandelbulber2 -> .ply -> Meshlab -> .obj -> Cinema 4D + Arnold -> .tiff -> Photoshop + Nik Collection -> .jpg

[Bonnie Blaze] KOATED X DREAMDAY

 

Available at the Dreamday Event RIGHT NOW ♥♥

@dreamday.sl @ 3PM SLT !!!!

 

Like, repost & tag your friends to win the Bonnie Blaze Full Set + Shirt

 

I'll be picking 3 winners at random tomorrow, but you MUST TAG A FRIEND AND REPOST ♥

 

----------------------------------------------

 

4 single-color sets, each including:

 

Outfit [Top & Bottoms]

Half Shirt [Single]

 

---------------------------------------------

 

Fatpack includes:

 

Top, Bottoms & Half Shirt

 

12 Solid colors

 

6 Jean Colors

 

Option for opacity !!!

 

---------------------------------------------

 

Sizes :

 

Peach, Kupra, Ebody, Legacy & Maitreya !!

 

#secondlife #secondlifeavi #secondlifeonly #secondlifestyle #secondlifecreator #secondlifeclothing #blender3d #blendercommunity #blendertutorial #blender3dmodelling #youtubechannel #MeshLab

using mirrors to turn a single kinect into 5 kinects, scanning all sides of an object simultaneously.

 

You can't really do it in a single frame; you have to move an aperture around in front of the IR projector and build up a depth image. Otherwise there is too much interference from the reflections of the projection.

As you can see, things work a lot better than in the previous version. Not everything, though, so I expect to be doing a third and hopefully final prototype soon.

Sweet: Managed to get the normalmap.gdp shader from MeshLab working in Processing. Crimes against good taste may commence. I'm using Andres Colubri's GLGraphics library to handle the shader. Thanks to Greg Borenstein for his sample code, which is available on GitHub.

Playing with Structure Synth yesterday, I came across a few quite interesting things.

So far I had been using the template exporters to get renders done in Sunflow or POV-Ray, but both only offer text editors to position cameras and lights - no interactive way like in Blender.

By the way, I don't use Blender because my computer doesn't like it - a Blender render has a 1-in-3 chance of crashing my machine - dodgy memory, perhaps?

So I was looking for other ways and there is quite a nice one.

  

You can actually paste the complete EisenScript from Structure Synth into MeshLab.

It's not under Open or Import; the entry 'Structure Synth Mesh Creation' sits directly under Filters -> Create New Mesh Layer.

Select the EisenScript and copy it (Ctrl-C), then go to MeshLab's Filters -> Create New Mesh Layer -> Structure Synth Mesh Creation.

This opens a small input box - click in it, press Ctrl-A to select all and Ctrl-V to paste your script.
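For readers who haven't used Structure Synth: what you paste into that dialog is an EisenScript program. This is a tiny illustrative script of my own (not the one from the original post) - a spiral of boxes - of the sort the dialog accepts:

```
// Minimal EisenScript sketch (illustrative, not the original script):
// a tapering spiral of boxes. Paste the whole script into the
// Structure Synth Mesh Creation dialog in MeshLab.
set maxdepth 100
R1

rule R1 {
  { x 1 rz 6 ry 6 s 0.99 } R1
  box
}
```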

I don't change any other values.

This will create the structure in MeshLab, and from there you have many different export options.

  

I exported it to VRML (.wrl) because I wanted to add some lighting via PoseRay.

So I imported the .wrl file there, and setting up lights and materials became a lot easier than doing it text-based in the POV-Ray editor.

  

Then I was curious what the 'Kerkythea Output' is. It turns out that Kerkythea (www.kerkythea.net) is a freeware renderer I had never heard of.

I downloaded it along with a few resources like materials and skies, and it actually looks quite OK.

So in PoseRay I exported via the Kerkythea output to Standard XML (the KT zipped XML did not work for me) and imported that into the Kerkythea renderer.

After playing around for about an hour I found lights, materials and a sky I was happy with, then left it to render for another 3 hours with bidirectional path tracing.

  

So long story short here is the result of this totally free way of creating some quite nice renders.

2D black-and-white image generated with Processing, then color-coded to generate a 3D point cloud (also with Processing), 3D Delaunay triangulation applied with TetGen, and finally rendered in MeshLab.
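The first two steps of a pipeline like that - treating a grayscale image as a height field and emitting one 3D point per pixel - can be sketched in a few lines. The 4x4 "image", the scaling, and the output filename are all my own illustrative choices, not the original Processing code:

```python
# Sketch: turn a 2D grayscale "image" into a 3D point cloud, one point
# per pixel, with brightness as the z (height) coordinate.
# A tiny synthetic 4x4 array stands in for the real image.

image = [
    [0,  64, 128, 255],
    [32, 96, 160, 224],
    [0,   0, 255, 255],
    [16, 48,  80, 112],
]

points = [
    (x, y, value / 255.0)            # scale brightness to a 0..1 height
    for y, row in enumerate(image)
    for x, value in enumerate(row)
]

# Write the cloud as ASCII .ply, a format MeshLab reads directly.
with open("cloud.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {len(points)}\n")
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write("end_header\n")
    for x, y, z in points:
        f.write(f"{x} {y} {z}\n")

print(len(points))  # 16 points, one per pixel
```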

3D print of a dataform based on 365 days of Canberra weather data (July 08 - June 09). Daily minimum and maximum temperature generate the profile of the outer edge; the holes show rainfall per week. Model generated with Processing, boolean operation in Blender, cleaned in Meshlab, printed by Shapeways. I'll be showing this piece in the Beginning, Middle, End exhibition at ANU School of Art Gallery, 18-24 September - www.bmefestival.com

Test for full color 3D prints, made with Processing + HE_Mesh.

Preparing more classes for the upcoming toxiclibs release to be used in the workshop @ Sheffield Hallam Uni (SIAD) next week. MeshLab screenshot of an exported binary STL file created with the new volumetric IsoSurface class, showing the threshold surface of a 3D MRI scan taken from volvis.org, with a normal map applied for checking the mesh...

 

demo source code

Comparing my own Catmull-Clark vs Doo-Sabin implementations (after 3x recursion)

 

A group of Bronze Age hut circles on Shapley Common, Dartmoor, Devon, UK. These huts were occupied by farmers around 3500 years ago. There are a total of five hut circles at this location, with diameters ranging between five and nine metres. The huts have either a single or double layer of upright stones that form the walls. The northernmost hut circle lies within a small enclosure. A later linear bank passes through the site, crossing the enclosure and burying the side of one of the hut circles.

 

Vertical image derived from a 3D model created using Agisoft Photoscan and kite aerial photographs. The top image shows the site coloured and textured with the lighting present on the day; the lower image has colour removed and a little artificial directional lighting to aid interpretation of features.

 

Grid Reference (of hut circle in enclosure) : SX 69782 83138

 

Kite Aerial Photograph

 

11 April 2015

A small cup-like form derived from 150 years of Sydney temperature data. Treating the form as a stack of rings, each ring represents one year's temperature, with the months arranged radially. The higher the temperature, the further the surface is from the center. Rings are stacked from bottom to top: at the bottom of this form is data from 1859 - at the top, 2009.

 

Data points are smoothed using a moving average with a five-year span, in order to make the form printable. The data is sourced from the UK Met Office HadCRUT3 subset.
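The mapping described above - one ring per year, months arranged radially, temperature pushing the surface outward, values smoothed with a moving average - can be sketched roughly like this. The toy temperatures, base radius, and scale factor are my own illustrative choices, and I smooth a single ring here, whereas the artwork smooths across years:

```python
# Sketch of the data-to-form mapping: a ring of 12 vertices, one per
# month, with temperature controlling distance from the center.
import math

def moving_average(values, span=5):
    """Plain centered moving average; shorter windows at the ends."""
    half = span // 2
    out = []
    for i in range(len(values)):
        window = values[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def ring(monthly_temps, year_index, base_radius=10.0, scale=0.5):
    """One ring of vertices; higher temperature -> larger radius."""
    pts = []
    for month, t in enumerate(monthly_temps):
        angle = 2 * math.pi * month / 12
        r = base_radius + scale * t
        pts.append((r * math.cos(angle), r * math.sin(angle), year_index))
    return pts

# Toy data: one year of monthly temperatures (deg C), smoothed then mapped.
temps = [18, 18, 16, 13, 10, 8, 7, 8, 11, 13, 15, 17]
vertices = ring(moving_average(temps), year_index=0)
print(len(vertices))  # 12 vertices for one ring
```

Stacking one such ring per year from 1859 to 2009 and skinning between consecutive rings would give the printable surface.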

 

The form was generated in Processing, cleaned up in Meshlab and printed by Shapeways in their "Strong White Flexible" material.

Comparing my own Catmull-Clark Clojure implementation with that of Blender (2.69) - notice the "patchwork" effect in the latter...

Comparing my own Catmull-Clark Clojure implementation with that of Blender (2.69) - here the hexagon junctions differ most

Comparing my own Catmull-Clark vs Doo-Sabin implementations (after 3x recursion)
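For context on what these comparisons are comparing: the two simplest Catmull-Clark rules are the face point (centroid of a face) and the edge point (average of the edge's endpoints and the two adjacent face points). This is only an illustrative fragment on two hand-picked quads, not either of the compared implementations - a full scheme also needs the vertex-repositioning rule and connectivity bookkeeping:

```python
# Sketch of the Catmull-Clark face-point and edge-point rules on two
# unit quads sharing an edge in the z=0 plane.

def average(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

# Two quads sharing the edge (1,0,0)-(1,1,0).
quad_a = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
quad_b = [(1, 0, 0), (2, 0, 0), (2, 1, 0), (1, 1, 0)]

# Face point: centroid of each face's vertices.
fp_a = average(quad_a)   # center of the left quad
fp_b = average(quad_b)   # center of the right quad

# Edge point for the shared edge: average of the edge's two endpoints
# and the two adjacent face points.
edge = [(1, 0, 0), (1, 1, 0)]
ep = average(edge + [fp_a, fp_b])

print(fp_a, fp_b, ep)  # (0.5, 0.5, 0.0) (1.5, 0.5, 0.0) (1.0, 0.5, 0.0)
```

Doo-Sabin differs in that every old vertex spawns one new vertex per incident face, which is why the two schemes diverge most visibly around irregular (e.g. hexagonal) junctions.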

Made this today for the Shapeways SIGGRAPH contest. Printing this costs about $180 on Shapeways... just squeezing under the $200 limit for the contest.

A simple 3d-lattice Eden growth algorithm; geometry exported from Processing with SuperCAD, then rendered with MeshLab
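An Eden growth model on a lattice is simple enough to sketch in a few lines: start from one occupied cell and repeatedly occupy a random empty neighbor of the cluster. This is a generic sketch of the algorithm, not the original Processing code; cell count, seed, and cluster size are arbitrary:

```python
# Minimal 3D-lattice Eden growth: each step occupies one random empty
# site adjacent to the existing cluster.
import random

random.seed(1)
occupied = {(0, 0, 0)}
steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

for _ in range(200):
    # Frontier: empty lattice sites adjacent to the cluster.
    frontier = {
        (x + dx, y + dy, z + dz)
        for (x, y, z) in occupied
        for dx, dy, dz in steps
    } - occupied
    occupied.add(random.choice(sorted(frontier)))

print(len(occupied))  # 201 cells: the seed plus one per growth step
```

Rendering each occupied cell as a small cube (as SuperCAD-style exports do) gives the blobby aggregate seen in the image.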

Step #1 - HanSoloCarbonify Yourself ©

 

Spent some time last night with Pete Prodoehl at the Milwaukee Makerspace playing with taking photos with the Kinect and then printing them out on the Makerbot Cupcake. We also used MeshLab to soften the 3D image before going to print to give the print a smoother finish.


Some more snaps from earlier in the week showing my students what's possible with ~10 lines of code

There's background on the B-23 and how it crashed here:

www.flickr.com/photos/flickrdave/3925755985

 

Here's the story on the image itself. Back in 2009 I attended an aviation archaeology field school. (Long, stupid story....) Anyway, I had my pole aerial photography gear with me and a couple of us thought a good way to survey and document the site would be to take overhead shots of everything. We were hoping to stitch them all together to make a photo-realistic map of the site. The images we got were interesting but, unfortunately, we weren't able to combine all the images into the map we wanted. One software package I had heard of was too expensive for us and another that I did get my hands on was too fussy and wouldn't work with our raw images.

 

A few days ago, though, I learned about a new web site called Hypr3d.com that lets you upload images and it creates 3D models from them. I didn't mess around. I uploaded all 145 clear images we had of this site and let it chew on that overnight. By morning it had produced a rough model I could view in my browser and a higher resolution one that I could download to my PC. I used another application, Meshlab, to rotate it around to the views I wanted and had it render these two images. Both of these views are rendered in "Ortho" mode so there is no perspective. I think that is why it looks less like a photo and more like a painting or model. So, two years after field school ended, I finally have my orthographic projection of the whole site.
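The "Ortho" observation is easy to make concrete: orthographic projection simply drops the depth coordinate, while perspective projection divides by it, so in ortho mode near and far points of the same size land at the same screen position. A minimal sketch (the point coordinates are arbitrary):

```python
# Why ortho renders look flat: depth has no effect on screen position.

def ortho(x, y, z):
    return (x, y)                     # z is simply discarded

def perspective(x, y, z, d=1.0):
    return (d * x / z, d * y / z)     # nearer points (small z) appear larger

near = (1.0, 1.0, 2.0)
far = (1.0, 1.0, 8.0)

print(ortho(*near), ortho(*far))        # identical under ortho
print(perspective(*near), perspective(*far))  # differ under perspective
```

That loss of depth cues is exactly what makes the ortho render read as a painted map rather than a photograph.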

 

In retrospect, I would do this a little differently if I went back. Because all of our images were shot straight down, more or less, there isn't enough information about the vertical faces of the wreck. This shows up as a lot of distorted shapes on the front and sides. We should have taken more oblique shots from the perimeter looking inward. But when you look at the results from an overhead point of view, it looks damn near perfect.

 

More stuff: a Blog Entry.

This model on Hypr3d.com

Symbolic representations of the caetra of the Gallaecian warriors of the 1st millennium BC.

 

[1] 3D photogrammetry of the statue of the Gallaecian warrior of Armea.

Museo Arqueolóxico de Ourense.

[2] Roman coins with a representation of the Gallaecian caetra on the reverse. Museo Arqueolóxico e Histórico do Castelo de San Antón, A Corunha.

[3] Stone relief of the Gallaecian caetra, currently in the Roman sanctuary of Pietrabbondante, Molise, Italy. (DAI Rom Inst. Neg. 75.2648).

[4] Fragment of a Doric frieze from the late Republican period with a representation of the Gallaecian caetra, from a building of Pope Sixtus IV on the Via Flaminia in Rome, now held in the Musei Capitolini (Inv. Musei Capitolini 2262/S; DAI Rom Inst. Neg. 29.141).

 

NOTES:

3D photogrammetry of the statue of the Gallaecian warrior of Armea, on temporary display at the Museo das Peregrinacións de Santiago until 28 May 2017 - goo.gl/gTNjw2 - goo.gl/fzZehu - museoperegrinacions.xunta.gal/gl/content/na-procura-do-pa...

 

3D rendering of the statue of the Gallaecian warrior of Armea with Photoscan, MeshLab and ambient lighting, 2017 - © Diego Torres Iglesias - © Património Nacional Galego - © Museo Arqueolóxico de Ourense - © Museo das Peregrinacións de Santiago de Compostela. Special thanks to the team of the Museo das Peregrinacións de Santiago. © Musei Capitolini, Rome, Italy.

 

MORE INFO:

Freijeiro, A. B. (1971). Monumentos romanos de la conquista de Galicia. Habis, (2), 223-232. goo.gl/YmJDva

 

Cebreiro Ares, F. (2013). La emisión de sestercios del Noroeste a la luz de un nuevo hallazgo. SAGVNTVM. Papeles del Laboratorio de Arqueología de Valencia, 44, 203-206. goo.gl/xCEDp8

 

goo.gl/BxWy0V

 

www.musarqourense.xunta.es/es/peza_mes/moeda-da-caetra/

Drawing with the 3D brush with bi-directional spin along one axis and radius modulation...

@Getmakered goes to OSCON in Portland Oregon; has tons of fun; scans everyone; hosted an OSCON for Kids workshop on 3D Design & Printing; enjoyed ourselves immensely and can't wait for Austin next year!

 

combining the far infrared image from a handheld FLIR camera with the depth image from a kinect... brainstorming about multicamera alignment... really nice for tracking people in 3d.

Low poly mesh created in MeshLab, rendered with lighting in Vue Esprit, motion blurred, distorted and coloured in Photoshop CS6.

brainstorming about multicamera alignment... combining the far infrared image from a handheld FLIR camera with the depth image from a kinect... really nice for tracking people in 3d.

My low poly version of Doug. Created from a 3D scan and modified using Netfabb, Meshlab and Skanect. Available on Thingiverse.

 

johnbiehler.com

3D rendering of the icicle in run 111023, frame 200. The mesh is shown on the left in MeshLab; an image of the actual icicle is shown on the right.

  

The 3D form is reconstructed from the detected edges in the 8 images following image 200.
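Once the detected edges have been reduced to a radius profile along the icicle's axis, turning that profile into a 3D surface is a surface-of-revolution step. This is a hedged sketch of only that step - the actual Icicle Atlas reconstruction combines edge profiles from multiple views, and the profile values here are invented:

```python
# Sketch: revolve a measured radius profile r(z) into rings of 3D
# vertices. Consecutive rings would then be skinned into triangles.
import math

profile = [5.0, 4.2, 3.5, 2.6, 1.8, 1.0, 0.4]   # toy radii, one per height
segments = 16                                    # vertices per ring

rings = []
for z, r in enumerate(profile):
    ring = [
        (r * math.cos(2 * math.pi * k / segments),
         r * math.sin(2 * math.pi * k / segments),
         float(z))
        for k in range(segments)
    ]
    rings.append(ring)

print(len(rings), len(rings[0]))  # 7 rings of 16 vertices each
```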

  

See

www.physics.utoronto.ca/Icicle_Atlas

TMR posited it might take half an hour to make a d20 in Sketchup... it took 35-40 minutes, including figuring out a good process.

 

For the record, I used TopMod to generate the original twenty-face regular solid (it's an icosahedron, I believe, but my memory may be faulty), and I also cleaned it up in MeshLab (a couple of clicks just to make sure the mesh was OK).

 

It Skeined great, at least for the jumbo size dice.

3D print from ZCorp & i.materialise

 

data originally from a dental tomography scan

processed in OsiriX and MeshLab

 

dimensions: 27x21x20 cm

 

currently on display here:

www.devishal.nl/

