• Runtimes
  • Sample code for using the SpineC lib from Swift and Metal for rendering

I would like to share sample code of how to use SpineC runtime lib from Swift and render Skeletons with Metal.
Here is the repo - GitHub.
I implemented this for my project and finally found time to separate some of the code into a package to share with others.

It has a few limitations, which are described in the issues. For example, blending modes are ignored.


This looks cool, thanks for sharing!

Can you embed the Metal rendering in SwiftUI? How heavyweight is doing that? It'd be cool to see a test of, say, 100 separate SwiftUI views that are each rendering a Spine skeleton using Metal.

    Yeah, this is implemented as a subclass of MTKView, Apple's Metal view component. In theory, one could instantiate a few hundred of those, put them in a list, and see how it performs.

    Nate I'm not sure that I understand your question correctly. We have 2 things affecting performance: the SpineC lib on every frame, plus rendering.
    You want to use the same frame info from the SpineC lib and just duplicate the rendering to as many Metal views as possible? Do I understand correctly?

    I'm curious how many individual views that are rendering a moderately complex skeleton (e.g. Spineboy) you can have in SwiftUI before performance becomes unacceptable. Each view would do the spine-c processing and rendering.

      10 days later

      Nate Sorry for the late reply. I tried your request, partially. I didn't use SwiftUI, because the project from which I cut this package is UIKit based.
      But I did try multiple renderings at the same time. My iPhone 13 Pro Max starts dropping frames at around 21 views, and if I render 100 I get 12 fps. In both tests the fps drops over time as the device heats up. I tested with the same alien skeleton; I can add another one if you give me a link to a Spine project.

      I also played with texture caching, because by default every skeleton load creates textures for that skeleton. And it looks like it had no effect, even when I render 100 views. So it doesn't matter whether I load 1.7GB of textures or 100MB... performance is the same, which is curious to me.
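      A texture cache like this can be sketched as a lookup keyed by the atlas path, so repeated skeleton loads reuse a texture instead of creating it again. This is a hypothetical stand-alone C illustration (the textureForPath/createTexture names and the fixed-size table are made up, not the package's actual API):

      ```c
      #include <string.h>

      /* Hypothetical texture cache keyed by path: the first request creates
       * the texture, later requests return the cached handle. An int stands
       * in for a real Metal texture. */
      typedef struct { char path[256]; int textureId; } CacheEntry;

      static CacheEntry cache[64];
      static int cacheCount = 0;
      static int nextTextureId = 1;

      /* Stand-in for the expensive texture creation/upload. */
      static int createTexture(const char *path) { (void)path; return nextTextureId++; }

      int textureForPath(const char *path) {
          for (int i = 0; i < cacheCount; i++)
              if (strcmp(cache[i].path, path) == 0)
                  return cache[i].textureId;                 /* cache hit */
          /* cache miss: create once and remember it */
          strncpy(cache[cacheCount].path, path, sizeof cache[cacheCount].path - 1);
          cache[cacheCount].textureId = createTexture(path);
          return cache[cacheCount++].textureId;
      }
      ```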

      I pushed a test branch to the git repo - tests/show-multiple-views. Here is the line where you can change the desired number of views.
      And by commenting out this line you can disable texture caching.

      I hope this answers your question about performance. In this test, every view does the spine-c processing and rendering.

      P.S. I can see that SwiftUI would also let me play with performance on macOS, but for now I have what I made for a pure iOS project a couple of years ago. Maybe later I will introduce a SwiftUI option too.

      Super cool, thanks for that! We'll check it out!

        Nate After some thinking, I was actually surprised by such bad performance. And then I realized that I had run the app in debug... 🤦 I updated the branch to run in release and did some profiling. Here are the new results:

        • If I run 100 views in release on my iPhone 13 Pro Max, I actually get 60 fps.
        • If I run 200 views, I get 30 fps.

        Profiling shows that the majority of time was spent on the CPU. I think one of the main reasons is that I change the vertex order - instead of using the provided index map, I reorder the vertices so they run 1-2-3, then 4-5-6... and it looks like this is the bottleneck when we render 200 views.
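        To illustrate what that reordering means, here is a hedged C sketch of de-indexing; the deindex name and layout are made up for illustration, not the package's code. The point is that every vertex gets copied once per triangle use, every frame, which is the kind of CPU work that adds up at 200 views:

        ```c
        /* Hypothetical sketch: instead of drawing with the mesh's index
         * buffer, expand the indexed triangles into a flat 1-2-3, 4-5-6
         * vertex stream. xy holds x,y pairs; out receives 6 floats per
         * triangle. */
        static void deindex(const float *xy, const unsigned short *indices,
                            int triangleCount, float *out) {
            for (int i = 0; i < triangleCount * 3; i++) {
                out[i * 2]     = xy[indices[i] * 2];     /* x */
                out[i * 2 + 1] = xy[indices[i] * 2 + 1]; /* y */
            }
        }
        ```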

        Also, each view renders only at the texture size it occupies on screen. I take the view size and multiply it by the screen scale, and this is the target for rasterization. So the smaller the views are, the less work for the rasterizer. The rest of the work should be the same, though.
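        As a tiny illustration of that sizing, with hypothetical numbers (a view measured in points and an integer screen scale):

        ```c
        /* Rasterization target in pixels: view size in points times the
         * screen scale. A view half the size in each dimension costs the
         * rasterizer a quarter of the pixels. Numbers are hypothetical. */
        static int targetPixels(int widthPt, int heightPt, int scale) {
            return (widthPt * scale) * (heightPt * scale);
        }
        ```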

        Also, while I have your attention, maybe you can help me with something?
        While running in release, I'm getting a crash from time to time. It happens at a random time, or doesn't happen at all, but always with the same buffer/pointer - uv.

        Am I correct to assume that the spMeshAttachment.uvs size is always 2 * spMeshAttachment.uvs.trianglesCount? I'm getting a crash on the memcpy of this pointer, and the error says that I'm not allowed to read the last few bytes of the uv buffer. Which means one of two things: either I calculate the size wrongly, or I read after free. The last one should ideally not happen...

        Ah, interesting on the performance boost. It makes sense that posing so many skeletons would take a lot of CPU, depending on skeleton complexity. If you only pose and render the skeletons on screen, then of course there will only be a couple on screen at once and performance would be excellent, so I think it's most useful to see performance without culling. It seems there is basically no problem with any reasonable number of Spine animations. Super cool!

        I think the number of UVs is not based on the number of triangles, but on mesh.getWorldVerticesLength * 2. Can you point out the code in question?

          Nate Here is a permalink to the code, but I'm not sure how much it will help without long and deep digging in the rest of the project.

          This is already-modified code. And it looks like it should be just worldVerticesLength, because vertices are also 2 floats - x+y - the same as the UVs themselves.

          Also, I found a couple of lines in the C code which pushed me to this assumption. Lines from MeshAttachment.c:

          uvs = self->uvs = MALLOC(float, verticesLength);
          memcpy(copy->uvs, self->uvs, SUPER(self)->worldVerticesLength * sizeof(float));

          I need to create a buffer for the UVs, because it is simpler to work with a typed buffer, and because I need to copy a lot of data for rendering. I'm collecting data for the whole skeleton into a single render command (aka combining all bones into 1 draw call).
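          That batching step can be sketched like this in C; appendAttachment and the flat float buffer are illustrative stand-ins, not the package's actual types:

          ```c
          #include <string.h>

          /* Hypothetical sketch of the batching: append each attachment's
           * floats into one contiguous buffer so the whole skeleton is
           * submitted in a single draw call. Returns the new write offset
           * in floats. */
          static int appendAttachment(float *batch, int offset,
                                      const float *data, int floatCount) {
              memcpy(batch + offset, data, (size_t)floatCount * sizeof(float));
              return offset + floatCount;
          }
          ```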

          Also, the address sanitizer in Xcode confirmed that SpineC allocates sizeof(float) * worldVerticesLength, not 2 * sizeof(float) * worldVerticesLength. Not sure why I didn't use it in the first place... it even says where the memory was allocated and how many bytes were allocated... 😅
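          Given that conclusion, the size computation would look like this; Mesh and copyUvs are made-up stand-ins that only mirror the spine-c fields discussed here, not the real spMeshAttachment:

          ```c
          #include <string.h>

          /* Stand-alone illustration: worldVerticesLength already counts
           * individual floats (x and y per vertex), so the uvs buffer holds
           * exactly worldVerticesLength floats, not 2x that. */
          typedef struct { int worldVerticesLength; const float *uvs; } Mesh;

          static void copyUvs(const Mesh *mesh, float *dst) {
              /* correct size: one float per entry of worldVerticesLength */
              memcpy(dst, mesh->uvs,
                     (size_t)mesh->worldVerticesLength * sizeof(float));
          }
          ```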

          • Nate replied to this.

            kolya-j but not sure how it will help, without long and deep digging in the rest of project.

            True, I was hoping for something simpler to read. 😉

            You're right, getWorldVerticesLength is already * 2. The "length" name tries to convey this, e.g. versus "count", but I still manage to mix it up.