GreenSock

swampthang last won the day on June 19 2016

swampthang had the most liked content!

swampthang

Business Green
  • Content Count: 360
  • Joined
  • Last visited
  • Days Won: 1

Community Reputation

128 Specialist

About swampthang

  • Rank
    Advanced Member
  • Birthday September 11

Profile Information

  • Gender
    Male
  • Location
    Louisiana

Recent Profile Visitors

3,679 profile views
  1. I was making this way more complicated than it should have been. The issue was the width and height relative to the viewBox. For now, this works: https://codepen.io/swampthang/pen/mdPWXwv?editors=1010
  2. Thanks for the reply, @ZachSaucier. I'm trying to animate the g element from just outside the stage area on the right (the dashed lines) to its current position. Then animate it to just outside the stage area on the left. In my app, I'm stuck (at this time anyway) with < version 3 so can't really leverage those methods.
  3. I'm trying to figure out how to calculate the x position for a g element that would animate in from just "Off-Stage" (right, left, top, bottom) in an embedded SVG that is inside a div container which is scaled. When you change the container's scale setting (zoom) everything changes. Also, the original SVG was 640 x 640 but I resized it to 186 x 186 in this example which adds to the confusion for my feeble noggin. The reason for all these differences is a user can zoom the stage container in and out as well as resize the embedded SVG so I need to accommodate those changes. Anyone have any idea how to get this to work?
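The conversion described above can be isolated as a pure helper. This is a hedged sketch, not from the pen: the names `zoom`, `viewBoxWidth`, and `renderedWidth` are my assumptions, and the idea is simply that an on-screen pixel distance divides by the combined scale of the container zoom and the SVG's rendered-size-to-viewBox ratio.

```javascript
// Sketch: convert an on-screen pixel distance into SVG user units so an
// element can be positioned just off-stage. All parameter names are
// assumptions, not from the CodePen.
function toUserUnits(screenPx, zoom, viewBoxWidth, renderedWidth) {
  // each user unit occupies zoom * (renderedWidth / viewBoxWidth) screen pixels
  return screenPx / (zoom * (renderedWidth / viewBoxWidth));
}

// e.g. a 640-unit viewBox rendered at 186px, container zoomed 2x:
// 93 screen pixels correspond to roughly 160 user units
```

Recomputing this whenever the user zooms the container or resizes the embedded SVG keeps the off-stage x correct under both changes.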
  4. Thanks for the clarification, @OSUblake. Really appreciate your weighing in.
  5. Awesome, @ZachSaucier! I added it to this fork. https://codepen.io/swampthang/pen/GRZJyYy I had tried dynamically adding everything in that fork yesterday thinking it might make a difference but it didn't. So, I implemented your suggestion by adding a "changing" class (with will-change: opacity) just before running the animation. But, it's recommended here that you use will-change sparingly. So, I removed it after the animation was finished but it disappeared again. So I added the z = 1 thing after removing the class and it works! Thanks so much for the suggestion! By the way, I only need this to work in Chromium because it's in an Electron app.
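The sequence described above can be sketched as follows. This is a hedged illustration: `animLib` stands in for gsap/TweenMax, and the `changing` class is assumed to set `will-change: opacity` in CSS.

```javascript
// Sketch of the workaround: toggle will-change around the tween, then
// nudge a transform so Chromium repaints the foreignObject text.
function fadeInText(el, animLib) {
  el.classList.add("changing"); // .changing { will-change: opacity; }
  animLib.to(el, {
    opacity: 1,
    duration: 1,
    onComplete() {
      el.classList.remove("changing"); // use will-change sparingly
      animLib.set(el, { z: 1 });       // forces a fresh render so the text stays visible
    }
  });
}
```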
  6. I understand, Blake has told me more than once to beware the evil foreignObject but I need a way to allow users to type and wrap text in a container. At this point, text is the only thing I place in a foreignObject. All bitmap images go in a nested SVG containing an image element and vector images go in as a nested SVG.
  7. Now this is interesting. I forked the above pen and changed the fromTo opacity to go from 0 to 0.99 for the text instead of all the way to 1 and it remains visible. Weird. https://codepen.io/swampthang/pen/eYZNGGd?editors=1010
  8. Hey, @PointC, thanks for the reply. I want to be able to layer text, images, and video all within a master SVG. I built a video creation app that already does all of this except allow stacking video elements in the same stack as other elements. Currently, the user can add videos, but all the videos sit behind everything else; in other words, they're not located inside the main SVG. I use that main SVG to build a PNG sequence and then stack all the animations on top of the videos. I'm looking for a way to let videos live among the other elements so a user can place them anywhere in the stack they want. I'm just experimenting with it right now but hoping there's a way to do this and leverage GSAP as the animation engine.
  9. Wondering if anyone has any experience with embedding video in an svg in combination with text. In the codepen, clicking the play button (in Chrome) reveals the text fading in (opacity) but it promptly disappears. I read here that you can embed video elements in a container such as a g, etc., but I could never get the video to display that way. I'm using foreignObjects but the video seems to "cover" the text unless it's animating. Any idea what might be going on?
  10. swampthang

    liveSnap

    Thanks, @ZachSaucier. My problem was that I didn't understand all the ins and outs of liveSnap. Your point is well taken regarding using type: 'x' rather than type: 'left'. Below is what I ended up with. It works the way I wanted it to. And sorry for not being clearer: I wanted either end of a block to snap to either end of the other 2, while still allowing you to move them around, which changes the snap points. https://codepen.io/swampthang/pen/zYrVLao?editors=1010
  11. swampthang

    liveSnap

    I think I finally figured it out. Here's codepen 3: https://codepen.io/swampthang/pen/pogXKGq?editors=1010
  12. swampthang

    liveSnap

    Ok, I'm an idiot. I need to be able to drag any of the 3 blocks to the left or right and have them snap to the edges of the other 2 blocks, but only when they get within a few pixels of those blocks. I created another version where I'm setting liveSnap to an array. Is there a way to return an array AND use a radius value? Ok, now I'm seeing that using an array is a one-time setup, so once you've dragged one of the blocks, snap is still using the old array. Here's the other pen: https://codepen.io/swampthang/pen/XWXLEvR
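    Since an array of snap points is read once when the Draggable is created, the usual fix is to pass liveSnap a function, which is re-evaluated on every move. A minimal sketch of such a callback (the names are mine; in a real handler you'd recompute the edges from the other blocks' current positions):

```javascript
// Snap x to the nearest edge in `edges`, but only when within
// `radius` pixels; outside the radius, return x unchanged.
function snapWithin(x, edges, radius) {
  let best = x, bestDist = radius;
  for (const edge of edges) {
    const d = Math.abs(edge - x);
    if (d <= bestDist) {
      bestDist = d;
      best = edge;
    }
  }
  return best;
}

// Wired into Draggable it would look roughly like (getOtherBlockEdges
// is a hypothetical helper that reads the other blocks' live positions):
// Draggable.create(".block", {
//   type: "x",
//   liveSnap: value => snapWithin(value, getOtherBlockEdges(), 10)
// });
```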
  13. swampthang

    liveSnap

    In Chrome, none of the blocks are draggable left/right.
  14. swampthang

    liveSnap

    I'm attempting to set up a simple Draggable with liveSnap. I want any of the green blocks to snap to either the left or right edge of the other 2, but none of the blocks are draggable. What am I doing wrong here? (Sorry about still using 2.x, but I can't upgrade yet.)
  15. @namisuki, some questions for you: Are you attempting to do this on a web server? Are you using, or can you use, Node.js? If so, can you install any node module on your server? I'm using Node.js in Electron and use ffmpeg (via the node module fluent-ffmpeg) to build out videos. ffmpeg is lightning fast, but you have to hand it a PNG sequence as input. I have everything in a master SVG. You can add images using the <image/> element in an embedded SVG, and text can be added in a foreignObject. I can point you in the right direction if you need help. As for creating an image sequence, here's an example of how to create one from a GSAP timeline (older, so not 3.0): https://codepen.io/swampthang/pen/yaYJpm

With each iteration of the loop I use the node module base64-img to save each frame as a transparent PNG, like this:

```javascript
const base64Img = require('base64-img');

function processTimeline(obj) {
  let current = 0,
      thisImgData,
      FPS = 30,
      saveDir = 'your/save-to/path/',
      masterTL = yourMasterTimeline,
      masterSVG = document.getElementById("stage-master"), // id of the master SVG
      duration = masterTL.duration(),
      frames = Math.ceil(duration * FPS);

  TweenMax.ticker.fps(FPS);
  masterTL.progress(0);                  // set the masterTL's progress to 0
  masterTL.progress(current++ / frames); // advance by 1 frame
  makePNG();

  function makePNG() {
    if (current == frames) {
      masterTL.progress(1); // make sure the timeline is all the way to the end
    }
    let xml = new XMLSerializer().serializeToString(masterSVG);
    thisImgData = buildSvgImageUrl(xml);    // get the base64 string
    let filename = `img${pad(current, 5)}`; // zero-padded name, e.g. img00001
    loadImage();

    function loadImage() {
      // draw the current SVG frame to a canvas, then save it as a PNG
      let canvas = document.createElement("canvas");
      canvas.width = WD;  // WD and HT are your stage dimensions
      canvas.height = HT;
      let ctx = canvas.getContext("2d");
      let img = new Image();
      img.crossOrigin = "Anonymous";
      img.onload = function() {
        ctx.drawImage(img, 0, 0, WD, HT);
        let url = canvas.toDataURL("image/png");
        // base64-img (see the require at the top) writes the data URL to disk
        base64Img.imgSync(url, saveDir, filename);
        checkCurrentFrame();
      };
      img.onerror = function(error) {
        console.log(error);
      };
      img.src = thisImgData;
    }

    function checkCurrentFrame() {
      if (current == frames) {
        createAnimationVideo(obj); // whatever your video creation script is
      } else {
        masterTL.progress(current++ / frames); // advance the current frame
        makePNG();
      }
    }
  }

  function buildSvgImageUrl(svg) {
    let b64 = window.btoa(unescape(encodeURIComponent(svg)));
    return "data:image/svg+xml;base64," + b64;
  }

  function pad(str, max) {
    str = str.toString();
    return str.length < max ? pad("0" + str, max) : str;
  }
}
```

Adding the PNG sequence as a single input is as easy as:

```javascript
proc = new ffmpeg();
proc.addInput(`${group.dir}img%05d.png`)
    .addInputOption('-thread_queue_size 512')
    .addInputOption(`-r ${obj.FPS}`)
    .addInputOption('-f image2');
```

You can look at the fluent-ffmpeg docs to see how to set up the ffmpeg process listeners like proc.on('start', function()). There's an on 'start', 'progress', 'error', and 'end'. Hope that helps. FFMPEG is an executable, so it's not subject to the single-thread limitation of JavaScript; fluent-ffmpeg is a node module that lets you make calls to the executable from within a JavaScript thread.
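The frame-stepping arithmetic above can be isolated as a tiny pure helper (the function name is mine), which makes the progress math easy to verify on its own:

```javascript
// Map a frame index to a timeline progress value in [0, 1].
function frameProgress(frame, fps, duration) {
  const frames = Math.ceil(duration * fps); // total frames to render
  return Math.min(frame / frames, 1);
}

// e.g. a 2-second timeline at 30fps renders 60 frames:
// frameProgress(30, 30, 2) -> 0.5, frameProgress(60, 30, 2) -> 1
```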