Efficient way of adding large meshes through the Python API

Scripting in Blender with Python, and working on the API

Moderators: jesterKing, stiv

paulm2
Posts: 3
Joined: Tue Oct 01, 2013 10:26 am

Efficient way of adding large meshes through the Python API

Post by paulm2 » Tue Oct 01, 2013 1:15 pm

Currently the way to create meshes in Python scripts with bpy seems to be Mesh.from_pydata(). While that's fine as a high-level API, it doesn't scale: it requires putting all geometry into nested Python data structures, which carries considerable time and space overhead for large meshes. Is there something like a numpy-equivalent API?
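For reference, this is the from_pydata pattern I mean (a minimal sketch with made-up data):

Code: Select all

import bpy

# Nested Python data structures, as from_pydata() expects
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
faces = [(0, 1, 2)]

me = bpy.data.meshes.new("example")
me.from_pydata(verts, [], faces)  # vertices, edges, faces
me.update()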

The use case I have is to read large meshes (say 10-20M polygons each) from binary files and create Meshes from them. Using the supported importable file formats isn't really an option, as the (text) files become huge (0.9G for a single mesh in X3D format, times 800 timesteps) and parsing them with the Python-based importers is dog slow. Furthermore, creating text files adds nothing, as I could read the meshes directly from binary files if Blender's Python interface supported something like passing a contiguous block of memory holding vertices as V*3 floats and triangles as T*3 ints.
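For illustration, getting such a file into flat arrays is trivial with numpy (the header layout here is made up, not my actual format):

Code: Select all

import numpy as np

with open("mesh_0001.bin", "rb") as f:
    # Hypothetical layout: vertex count and triangle count, then the raw data
    num_verts, num_tris = np.fromfile(f, dtype=np.int32, count=2)
    verts = np.fromfile(f, dtype=np.float32, count=num_verts * 3)  # V*3 floats
    tris = np.fromfile(f, dtype=np.int32, count=num_tris * 3)      # T*3 ints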

Has any work been done in this direction? How hard would it be to extend the Python API?

stiv
Posts: 3645
Joined: Tue Aug 05, 2003 7:58 am
Location: 45N 86W

Post by stiv » Tue Oct 01, 2013 6:35 pm

Not surprising that a convenience function falls down when pushed past its design limits.

You might take a look at the HIRISE DTM importer. It creates terrain meshes from a big data file.

CoDEmanX
Posts: 894
Joined: Sun Apr 05, 2009 7:42 pm
Location: Germany

Post by CoDEmanX » Wed Oct 02, 2013 1:35 am

Here's a cube mesh added with the low-level API; it should have better performance than the operator:

Code: Select all

import bpy
from bpy_extras.io_utils import unpack_list, unpack_face_list

coords = (
    (-1.0, -1.0, -1.0),
    (-1.0, 1.0, -1.0),
    (1.0, 1.0, -1.0),
    (1.0, -1.0, -1.0),
    (-1.0, -1.0, 1.0),
    (-1.0, 1.0, 1.0),
    (1.0, 1.0, 1.0),
    (1.0, -1.0, 1.0),
)

faces = (
    (4, 5, 1, 0),
    (5, 6, 2, 1),
    (6, 7, 3, 2),
    (7, 4, 0, 3),
    (0, 1, 2, 3),
    (7, 6, 5, 4),
)

me = bpy.data.meshes.new('Cube')

# Vertices: allocate, then fill the flat coordinate array in one call
me.vertices.add(len(coords))
me.vertices.foreach_set("co", unpack_list(coords))

def get_loop_starts():
    # Offset of each polygon's first corner in the flat loop array
    arr = []
    i = 0
    for f in faces:
        arr.append(i)
        i += len(f)
    return arr

def get_loop_totals():
    # Number of corners per polygon
    return [len(f) for f in faces]

# Loops: one entry per face corner
me.loops.add(sum(len(f) for f in faces))
me.loops.foreach_set("vertex_index", unpack_face_list(faces))

# Polygons: each references a contiguous run of loops
me.polygons.add(len(faces))
me.polygons.foreach_set("loop_start", get_loop_starts())
me.polygons.foreach_set("loop_total", get_loop_totals())
#me.polygons.foreach_set("vertices", unpack_face_list(faces))

me.update(calc_edges=True)
me.calc_normals()
#me.validate(True)

ob = bpy.data.objects.new(me.name, me)
bpy.context.scene.objects.link(ob)
bpy.context.scene.update()
I'm sitting, waiting, wishing, building Blender in superstition...

paulm2
Posts: 3
Joined: Tue Oct 01, 2013 10:26 am

Post by paulm2 » Thu Oct 03, 2013 1:24 pm

stiv wrote: Not surprising that a convenience function falls down when pushed past its design limits.

You might take a look at the HIRISE DTM importer. It creates terrain meshes from a big data file.
I looked at the code of that importer, and it seems to use Mesh.from_pydata as well, unless I'm missing something.

paulm2
Posts: 3
Joined: Tue Oct 01, 2013 10:26 am

Post by paulm2 » Thu Oct 03, 2013 1:27 pm

CoDEmanX wrote: Here's a cube mesh added with the low-level API; it should have better performance than the operator:

...
But again, this first forces you to put all geometry into Python data structures, which is inefficient and exactly what I hope to avoid. The data is already in memory in a compact binary format (numpy arrays); there should be no need to first unpack it (let alone make yet another copy with unpack_list()) to get the data into Blender.
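For illustration, this is roughly what I'd hope to write instead, assuming foreach_set() can take flat array data directly (whether it has a fast buffer path for numpy arrays may depend on the Blender version):

Code: Select all

import bpy
import numpy as np

# Stand-ins for arrays read straight from a binary file
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0]], dtype=np.float32)
tris = np.array([[0, 1, 2]], dtype=np.int32)

me = bpy.data.meshes.new("big_mesh")
me.vertices.add(len(verts))
me.vertices.foreach_set("co", verts.ravel())        # flat V*3 floats, no nesting

me.loops.add(tris.size)
me.loops.foreach_set("vertex_index", tris.ravel())  # flat T*3 ints

me.polygons.add(len(tris))
me.polygons.foreach_set("loop_start", np.arange(0, tris.size, 3))
me.polygons.foreach_set("loop_total", np.full(len(tris), 3))

me.validate()
me.update(calc_edges=True)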

PS: unpack_face_list only seems to work for quads, not triangles.

CoDEmanX
Posts: 894
Joined: Sun Apr 05, 2009 7:42 pm
Location: Germany

Post by CoDEmanX » Thu Oct 03, 2013 5:06 pm

You can't keep the data in numpy arrays either way, but sure, it would be more efficient to convert it directly to Blender's data structures.

Not sure about unpack_face_list; I never really tested my own script. I believe I used an earlier version of this script to generate it:

Code: Select all

import bpy

ob = bpy.context.object
if ob is None or ob.type != 'MESH':
    raise TypeError("Mesh object required")

me = ob.data

if not me.tessfaces and me.polygons:
    me.calc_tessface()

# Adjust this path!
file = open("D:\\house.py", "w")

file.write("import bpy\n")
file.write("from bpy_extras.io_utils import unpack_list, unpack_face_list\n")
file.write("\n")
file.write("coords = (\n")

for v in me.vertices:
    file.write("\t(%f, %f, %f),\n" % v.co[:])
    
file.write(")\n")
file.write("\n")

file.write("faces = (\n")

for f in me.tessfaces:
    fv = f.vertices
    if len(fv) == 3:
        tris = (fv[0], fv[1], fv[2]),
    else:
        tris = (fv[0], fv[1], fv[2]), (fv[2], fv[3], fv[0])
        
    for tri in tris:
        file.write("\t%s,\n" % str(tri))

file.write(")\n")
file.write("\n")

file.write("texFaces = (\n")

# Requires an active UV layer; guard against meshes without one
if me.tessface_uv_textures.active is None:
    raise ValueError("Mesh has no active UV layer")
uv_tex = me.tessface_uv_textures.active.data
for f in me.tessfaces:
    if len(f.vertices) == 3:
        tris = (0, 1, 2),
    else:
        tris = (0, 1, 2),(2, 3, 0)
    
    uv_face = uv_tex[f.index]
    for tri in tris:
        file.write("(")
        file.write(", ".join("(%f, %f)" % (uv_face.uv[i][:]) for i in tri))
        file.write("),\n")

file.write(")\n")
file.write("\n")


script = """
me = bpy.data.meshes.new('obj01')
me.vertices.add(len(coords))
vertices_flat = [vv for v in coords for vv in v] 
me.vertices.foreach_set("co", vertices_flat) 

#me.edges.add(0)
#me.loops.add(sum(len(f) for f in faces))

me.tessfaces.add(len(faces)) 
me.tessfaces.foreach_set("vertices_raw", unpack_face_list(faces))

uv_tex = me.tessface_uv_textures.new(name="officialUV")

for i, face in enumerate(faces):
    tface = uv_tex.data[i]
    tface.uv1 = texFaces[i][0]
    tface.uv2 = texFaces[i][1]
    tface.uv3 = texFaces[i][2]

me.validate()
me.update(calc_edges=True) 
me.calc_normals() 

ob = bpy.data.objects.new(me.name, me)
bpy.context.scene.objects.link(ob)
bpy.context.scene.update()
"""

file.write(script)
file.close()
It triangulates by design, but I'm sure both scripts could be made to work with arbitrary polygons.


If you really need high performance, there's no way around writing a native importer. Direct memory access from Python seems dangerous, but I agree that API functionality for loading geometry from memory would be useful.
I'm sitting, waiting, wishing, building Blender in superstition...

stiv
Posts: 3645
Joined: Tue Aug 05, 2003 7:58 am
Location: 45N 86W

Post by stiv » Thu Oct 03, 2013 6:49 pm

Some utilities or adaptor classes to interface with numpy arrays would be handy - whether pure Python or as a much faster C extension.
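For instance, a pure-Python sketch of such an adaptor, just wrapping the foreach_set() pattern from earlier in the thread (the function name and triangles-only layout are illustrative):

Code: Select all

import bpy
import numpy as np

def mesh_from_arrays(name, verts, tris):
    # verts: (V, 3) float array, tris: (T, 3) int array
    me = bpy.data.meshes.new(name)
    me.vertices.add(len(verts))
    me.vertices.foreach_set("co", verts.ravel())
    me.loops.add(tris.size)
    me.loops.foreach_set("vertex_index", tris.ravel())
    me.polygons.add(len(tris))
    me.polygons.foreach_set("loop_start", np.arange(0, tris.size, 3))
    me.polygons.foreach_set("loop_total", np.full(len(tris), 3))
    me.validate()
    me.update(calc_edges=True)
    return me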

