Showing results for tags 'javascript'.

Found 296 results

  1. I'm trying to fire a timeline when an li is hovered, so I create a var with my timeline and use event.delegateTarget to find its children instead of selecting everything by class. This is my code:

         $(".gb-help-list ").on('mouseover', 'li', function (e) {
             console.log('mouse');
             var tl = new TimelineLite();
             tl.to($(event.delegateTarget).find('.line-help'), 2, {width: '100px', ease: Power4.easeOut})
               .to(".line-help", 2, {width: '100%', ease: Power4.easeOut}, "-=1");
         }).on('mouseout', 'img', function (e) {
             TweenLite.to($(e.delegateTarget).find('span'), 1.5, {opacity: 0});
         });

     I don't see any error in the console, but the first .to() tween - the one targeting $(event.delegateTarget).find('.line-help') - doesn't fire. Can anyone help me? Thanks in advance.
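     A minimal sketch of one likely fix (assuming jQuery and TimelineLite are loaded, and that the intent is to animate only the hovered item's .line-help): use the handler's own e parameter rather than the global event, and scope the lookup to this (the hovered li) rather than e.delegateTarget, which refers to the whole .gb-help-list container.

         $(".gb-help-list").on('mouseover', 'li', function (e) {
             // "this" is the hovered <li>; e.delegateTarget would be the entire list
             var line = $(this).find('.line-help');
             var tl = new TimelineLite();
             tl.to(line, 2, {width: '100px', ease: Power4.easeOut})
               .to(line, 2, {width: '100%', ease: Power4.easeOut}, "-=1");
         });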
  2. Hi - this isn't a strictly GSAP-related question, but you guys are wicked smaaaht, so I thought I'd ask here. (Incidentally, I'm using the fantastic Bannertime generator from @joe_midi, but this isn't related to that either.) I have an object that's centered in the browser window using a combination of negative margins and absolute positioning. When resizing the browser window, the border flickers on the right and bottom edges in Safari only - it's fine in Chrome and Firefox. The structure of the object is as follows:

         <div class="banner">
             <div class="border"></div>
         </div>

     The .border div is 2px smaller than its parent element, to account for the 1px border all around, and it's this border that flickers when the browser is resized. I have a demo of this in action here. Again, I see this behavior in Safari only. I would love any suggestions as to how I can prevent this from happening. Thanks, guys!
  3. I'm using the GreenSock Draggable as a thermostat. The temperature increases or decreases every 18 degrees of rotation (depending on whether the rotation is clockwise or counter-clockwise). I'm grabbing the rotation and using the remainder operator (%), and if it is equal to 0, the temperature changes. With the dial at 0 (rotation) the temperature is at 18, and at -180 (rotation) the dial should reach 26 degrees. However, the rotation and the temperature don't seem to be working in unison. Just to add, the temperature should be at 22 degrees at -90 (rotation). Here is the codepen: http://codepen.io/jetchy/pen/EgAdgP Thanks.
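     A rough sketch of one possible approach (the element ids are hypothetical, and the 22.5-degree step below is derived from the 0° → 18 and -180° → 26 endpoints described above, which doesn't quite match the 18° step mentioned - adjust to taste): rather than waiting for the rotation to land exactly on a multiple of the step - which onDrag can easily skip over - map the current rotation straight to a temperature on every drag tick.

         Draggable.create("#dial", {
             type: "rotation",
             bounds: {minRotation: -180, maxRotation: 0},
             onDrag: function () {
                 // 0deg -> 18, -180deg -> 26 (one temperature step per 22.5deg here)
                 var temp = 18 + Math.round(-this.rotation / 22.5);
                 document.getElementById("temperature").textContent = temp;
             }
         });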
  4. Hello, I'm uploading my published Adobe Animate creative to Google AdWords; I've never done this before. I have the image, the JavaScript, and the HTML in my zip file, so everything should be good to go. However, when I upload and preview it, it doesn't animate - it's just the static background image. Am I doing something incorrectly with my code? Should I be doing something different with my publishing settings in Animate? Has anyone else encountered this when trying to put an animated ad onto AdWords? Thanks in advance for your help.
  5. Note: This page was created for GSAP version 2. We have since released GSAP 3 with many improvements. While it is backward compatible with most GSAP 2 features, some parts may need to be updated to work properly. Please see the GSAP 3 release notes for details.

Chrome 53 debuted a new "feature" to improve animation performance and graphics fidelity, but it had some nasty side effects that caused quite a few animators to get unpleasant phone calls from angry clients whose ads and web sites suddenly looked blurry and/or stuttery. Every other browser (including previous versions of Chrome) renders the same animation beautifully. Chrome's new behavior may also result in WORSE animation performance. A lengthy discussion with the Chrome team revealed some disturbing tradeoffs that animators need to know about, and that could spell trouble with other browsers too unless we band together as a community and make our voices heard. At the heart of the controversy is the will-change CSS property.

What is "will-change"?

It gives developers a way to say "hey, browser, I'm gonna animate this property, so please do whatever you can to prepare and make it happen smoothly," which often means creating a compositor layer to get GPU-acceleration of transforms and opacity. Think of a compositor layer like a screenshot of the element that the browser can store on the GPU to move/scale/rotate cheaply instead of re-computing all the pixels on each screen refresh. Read Sara Soueidan's excellent in-depth article here for details.

Problem: blurry, stuttering animations

What is Chrome 53 doing differently? Chrome is basically saying "In the past, I intelligently managed when and how to rasterize elements, but now that the will-change property exists, I'll just make that serve as a blind toggle switch for rasterization instead. If things look blurry, it's not my problem - the developer will need to jump through some hoops (described below) to make things look sharp again."

It all boils down to how and when "rasterization" of an element occurs (changing it into pixels stored on the GPU). If rasterization happens when the element is very small, it will be lower resolution. When scaled up, it'll look blurry/pixelated. On the other hand, if you rasterize while it's at its native scale (1) or above, you'll get a much higher-quality image with more pixels. Another key factor is how it is rasterized. Apparently Chrome uses a completely different algorithm for rasterizing an <img> than a <div> with a background-image, even though both use identical source files and are sized the same! Here's an example of how they look in Chrome 53:

Update: The Chrome team says they've fixed the bug that caused background-image to render differently than <img> (issue 649046) and it should be in the 9/29 release of Chrome Canary.

Both factors (the when and the how) are at play in Chrome 53's new behavior (which apparently was a purposeful engineering choice based on will-change). According to this document, all content now gets re-rasterized when its scale changes (which happens up to 60 times per second during animation of scale/scaleX/scaleY properties). That's supposed to keep things sharp, but in this case Chrome's background-image algorithm applies some sort of pixel-snapping which causes that odd vibration during animation. (Update: should be fixed soon.) This re-rasterization comes at a cost performance-wise too.
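(As a quick point of reference before going further, the hint at the center of all this is a single declaration. A minimal sketch, with a hypothetical selector, of applying and releasing it from JavaScript via the CSSOM:)

    var el = document.querySelector(".animated-thing");
    el.style.willChange = "transform";   // ask the browser to layerize the element up front
    // ...animate transforms/opacity...
    el.style.willChange = "auto";        // release the hint when the animation is done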
Previously, Chrome applied some heuristics to sense when it was appropriate to re-rasterize to avoid blurriness, so it only kicked in when necessary. But now you must opt in to get the layerizing benefits by setting will-change: transform. The Chrome team says this is an "improvement" because it puts the control into the hands of developers, but clearly this shift in behavior comes at quite a price. Overnight, the rug got pulled out from under many animations around the web, hurting performance (due to the constant re-rasterizing by default) and also introducing those visual vibrations when scaling background-image graphics (quite common for sprite sheets). To be clear, this primarily affected scaling animations, not ALL animations on the web - and of course anything where a background-image was used and the element was layerized (making it blurry in Chrome 53).

Partial solution: set will-change: transform

Complaints rolled in quickly, and the Chrome team suggested that developers go back and edit all their affected animations by adding will-change: transform, which would layerize/rasterize those elements (skipping re-rasterizing on every refresh). It's a bit of a nightmare to go back and find all the affected animations and make the necessary edits, but hey, it's just adding one property to the CSS so it shouldn't be too bad, right?

Oops, that breaks it in other ways

Chrome chose to implement will-change such that it will trigger rasterization at whatever the current scale happens to be, so if you've got an element that starts at scale(0.1) and animates up to scale(1), rasterization happens at scale(0.1), thus it will look terrible (blurry/pixelated) at the end of the animation. Here's an example showing the SAME image animating to identical scales, but with flip-flopped starting/ending values: (View the codepen here)

Partial solution: toggle will-change back and forth

Hold your nose...here comes the hack. In order to trigger re-rasterization of the element to keep it clear, the Chrome team suggested toggling will-change back to auto during the animation, then waiting until a requestAnimationFrame elapses before setting it back to transform...and then doing it again, and again, at whatever frequency the developer wants in order to keep things acceptably sharp. So will-change: auto is being pressed into service to explicitly tell the browser "rasterize me on the next requestAnimationFrame." Yes, you read that correctly: animators must turn OFF the very property whose entire purpose is to be ON for animation, signaling change, and then toggle it back and forth many times during the animation. So we're essentially telling the browser "I'm gonna change this...no I'm not...yes I am...nope..." all while in the process of animating. This doesn't exactly sound consistent with the intent of will-change, nor does it seem performant (Google's document says "Be aware, however, that there is often a large one-time performance cost to adding or removing will-change: transform.").
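(A rough sketch of what that toggling hack looks like in practice - not a recommendation; the selector and the re-raster frequency are arbitrary/hypothetical: flip will-change to "auto" so the browser re-rasterizes on the next frame, flip it back to "transform" to re-layerize at the new scale, and repeat as often as needed to keep things acceptably sharp.)

    function keepSharp(el) {
        el.style.willChange = "auto";            // "re-rasterize me on the next frame"
        requestAnimationFrame(function () {
            el.style.willChange = "transform";   // re-layerize at the current (larger) scale
        });
    }

    var el = document.querySelector(".scaling-thing");
    var frame = 0;
    TweenMax.fromTo(el, 2, {scale: 0.1}, {
        scale: 1,
        onUpdate: function () {
            if (++frame % 15 === 0) {            // every 15th tick - purely illustrative
                keepSharp(el);
            }
        }
    });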
Gotcha: never de-layerize

Even if you're willing to follow the advice to set will-change: transform and toggle back and forth during the animation to maintain a reasonable level of clarity, there's one last gotcha - if you set it back to will-change: auto at the end of the animation and give it a non-3d transform (to de-layerize it), you'll see a jarring shift in pixels and clarity for anything with a background-image! The Chrome team advises in that case that you make sure it always has a 3d transform thereafter to prevent it from de-layerizing. Following that advice puts you at risk of running out of memory (or hitting performance problems), plus the background-image will always be slightly blurry. Here's what it looks like to toggle between the two modes at the end of the animation:

Update: The Chrome team clarified that this was a temporary fix, not a long-term solution. The background-image rendering bug should be resolved soon, and this "never de-layerize" suggestion will no longer apply at that point.

The bigger issue, beyond Chrome...

The will-change spec doesn't really specify implementation details, which means that Chrome's new behavior may be completely unique; Firefox might do something different, and then there's Edge, Safari, Opera, Android, etc. Perhaps Chrome requires that developers toggle back and forth to maintain clarity, but what if Firefox interprets that differently, or imposes a big performance penalty when doing the same thing? What if developers must resort to various [potentially conflicting] hacks for each browser, bloating their code and causing all sorts of headaches? We may have to resort to user agent sniffing again (did you just throw up a little in your mouth?). This will-change property that was intended to SOLVE problems for animators may end up doing the opposite. It seems wise for the browsers to step back and let the spec authors fill in the implementation details and gain consensus before moving forward.

Another problem: stacking contexts

As mentioned in this article, will-change can also affect the stacking context of elements, leading to unintended changes in how things render/stack on your page. So your content may stack differently in browsers that do support will-change than in those that don't. More sniffing, yay!

To summarize:

Before Chrome 53: Just animate stuff. No need to jump through any hoops as a developer to get decent clarity (though the Chrome team points out that there was still some blurriness in certain edge cases that they've heard complaints about).

After Chrome 53: Make sure to set will-change: transform if you're animating transforms and want to opt in to performance optimizations and avoid jittery background-image scaling. But be careful about how it might affect stacking contexts, and keep checking when other browsers decide to implement will-change, as it could change how your content looks. If you're scaling up, make sure you toggle will-change back and forth to/from auto during the animation to maintain clarity (but sacrifice performance). Make sure there's a 3D transform applied throughout to prevent de-layerization (which would cause a big performance hit). Don't switch back to a 2D transform at the end (at least for elements with a background-image), or you'll see a jarring pixel shift. (Update: should be fixed soon in Chrome, so this step won't be necessary at some point.) Don't forget to go back and find/fix existing animations that are affected by the new Chrome behavior.

Is Chrome going to fix this?

As of today, the Chrome team says they've thought a lot about this and feel pretty strongly that the new behavior is an improvement, so there's no plan to change it (that we know of at least).
Here are the solutions we proposed (with the answers we got):

Instead of putting the burden on developers to manually toggle will-change back and forth between transform and auto during the animation, just have the browser natively sense when re-rasterization is prudent and do that automatically, which would be much faster than JS anyway. It seems rather trivial for the browser to sense when an element has been scaled greater than a certain delta (like 0.2) and trigger a rasterization.
Chrome team's answer (summary): "That's too hard (complex). The browser might get it wrong sometimes, so it's better to have developers do it at the JS level. And JS isn't that much slower than native. It's just a small amount of extra work for animators (or library developers)."

The browser could always perform rasterization at native size (scale of 1) or the current scale, whichever is bigger. That way, nobody would run into those nasty blurred images when scaling up from a small value to 1 (pretty common) and there's no need to keep re-rasterizing.
Chrome team's answer: "Rastering at native size and then scaling that down with bilinear filters in the compositor to something less than 0.5 or so will start to look noticeably bad."

If the goal is to put control into developers' hands, why not expose an API for defining what scale rasterization should occur at, like element.cacheTransform = "scale(1)"?
Chrome team's answer: "It may not be so easy to get it right in terms of expressiveness, and requires new APIs...and also to define what raster scale means, which seems quite tricky in general, especially while not over-fitting to current implementation strategies. That might happen later on."

The browser should use the same algorithm to rasterize (and scale) anything. The one being used by Chrome for <img> looks great, so please use that for background-image too.
Chrome team's answer (summary): "Acknowledged. We're working on a fix."

Instead of turning will-change into a convoluted way to control rasterization in the browser and risking opening a can of worms with other browsers doing things completely differently due to vague specs, roll back the behavior and work with the spec authors to define implementation details, then re-approach later when consensus is reached.
Chrome team's answer: "The intention of will-change is to give a hint to the browser that the referenced property is going to be animated, and for the browser to take steps to optimize performance for that use case. This is why will-change: transform creates a composited layer: because animating transform afterward will therefore not later have the startup cost and per-frame cost of creating the composited layer and rasterizing it. Following this logic, further extending the meaning of will-change: transform to not re-raster on scale change is similar, because it will make it faster."

Can GSAP fix it for me?

Yes and no. We've already experimented with the suggestions that the Chrome team made, but there are a few tricky challenges. First, we're hyper-focused on performance, so it's quite painful to have this new Chrome behavior force us to add extra logic that must run on every tick of every tween of any transform-related animation. It probably wouldn't be noticeable unless you're animating hundreds or thousands of elements simultaneously, but we built GSAP to handle crazy amounts of stress because sometimes that's what a project requires, so we're pretty frustrated by Chrome's decision to impose this burden on animators and libraries like GSAP.
But yes, we could do the toggling under the hood automatically and accept the performance tradeoff. Another major problem, though, is totally in the hands of browsers: rendering. We can't fix that jarring pixel shift at the end of animations of a background-image. We can force the 3D transform to remain, but as described above, that leaves things blurry and unnecessarily eats up memory. We work very hard to implement workarounds for browser inconsistencies and bugs like this, but we can't work miracles. We really need Chrome to step up and provide some better solutions.

Conclusion

There's no doubt that the Chrome team is working hard to move the web forward and deliver the best experience for their users. At GreenSock, Chrome is our primary browser that we use every day, so we're big fans. This article isn't intended to criticize anyone, but rather to bring attention to something that could spell big trouble for animators in the days ahead, beyond the headaches Chrome 53 caused with its new behavior. Hopefully Chrome will roll back the changes and/or implement some of the suggestions above. We'd encourage folks to make their voices heard (on the Chrome thread, below in the comments, with the spec authors, etc.). Perhaps we got something wrong - feel free to correct us or make other suggestions. Ultimately we want to help move animation forward on the web, so please join us.
  6. Hey guys, please help me understand THIS in JavaScript. I am trying to use THIS to select the same selector from the parent function. Here's an example:

         function Animate() {
             console.log("Clicked!");
             TweenMax.to("#thing", 1, {onStart: a});
         }
         function a() {
             console.log("Yeah!");
             TweenMax.to(this, 1, {y: "100px"});
         }

     I know this is clever, but I want to know how to select the same selector from the parent function. Same as THIS.
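     A minimal sketch of one way to reach the same target (in GSAP 1.x/2.x, callbacks such as onStart are scoped to the tween itself by default, so this.target is the element the tween is animating; "#thing" is taken from the post above):

         function Animate() {
             TweenMax.to("#thing", 1, {
                 onStart: function () {
                     // this = the tween; this.target = whatever "#thing" resolved to
                     TweenMax.to(this.target, 1, {y: "100px"});
                 }
             });
         }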
  7. Hi guys, I've been lurking on this forum for a while and have finally got to the point where I need to ask some questions. Firstly, I'd like to say hi - I really like what I've seen on this forum. I've been a full stack developer for 16 years, but the banner world is fairly new to me (I did help out with some Flash banners way back). The situation I'm in is that I'm a tech lead; the previous front end dev used to look after the banners and the process was very ad hoc. I have a freelancer with no banner experience, so I'm trying to hold his hand through the process even though I don't know much more than him. I'm trying to put some processes and standards in place for banner development.

     For the campaign that I'm working on currently, I've tried out this Yeoman generator - BuildABanner - using the DoubleClick template. I like what I see: it generates a nice folder structure, with nice gulp tasks and best practices (polite loading content etc). Using the DoubleClick template, it makes proper use of the Enabler events and triggers an Exit event as the clicktag. I can see in the console when I click on it that the Enabler is receiving the event. Example:

         bgExit = document.getElementById('background_exit_ad');

         // Add event listeners for DoubleClick
         addListeners = function() {
             bgExit.addEventListener('click', bgExitHandler, false);
         }

         bgExitHandler = function(e) {
             // Call exits
             Enabler.exit('HTML5_Background_Clickthrough');
         }

     I've been tasked with developing some banners built to 'DoubleClick' spec but which will be served on two different networks/suppliers (Cadreon and Ansible). I'm still struggling to grasp the difference between the platforms and networks, so I thought I'd do a test before sending it off to the media agency (so I don't look like an idiot). I tried out this DoubleClick Campaign Manager HTML5 Validator. When I validate my banner (with the above click tag) it passes all the rules except for the click tag, and says that it's not present. I thought that the Enabler.exit event was the best-practice way to do this? When I look back at some previous banners developed by my old front end dev, he had simply referenced enabler.js but implemented the click tag this way, marked up in the comments as "Standard ClickTag". Example:

         <html>
           <head>
             <meta name="ad.size" content="height=0,width=0">
             <script src="https://s0.2mdn.net/ads/studio/Enabler.js"></script>
             <script>
               var clickTag = "http://www.google.org";
             </script>
           </head>
           <body>
             <a href="javascript:window.open(window.clickTag)">blah</a>
           </body>
         </html>

     Adding this into my banner alongside the exit event would be doubling up the events, so as I see it, I should be using one or the other? Is this tool just stupid? I don't mind simplifying the code and doing it the generic way, but it makes more sense to me as a dev to use the proper events within Enabler, and it would also allow me to keep the BuildABanner generated code as untouched as possible. Also, does anyone have any feedback on the BuildABanner system itself? Does anyone use it? Or do you use a different Yeoman tool? Very keen to hear some real world feedback on this. Thanks, Marty
  8. Hi guys, I have these timelines that are very similar and I'm wondering if it's possible to loop through them or something to have cleaner code. This is what I'm working with:

         var rt_step1 = $('.contact_module2016 .step1'),
             rt_step2 = $('.contact_module2016 .step2'),
             rt_step3 = $('.contact_module2016 .step3'),
             rt_step4 = $('.contact_module2016 .step4'),
             rt_step5 = $('.contact_module2016 .step5'),
             step1Timeline = new TimelineMax(),
             step2Timeline = new TimelineMax(),
             step3Timeline = new TimelineMax(),
             step4Timeline = new TimelineMax(),
             numbers = ["step1Timeline", "step2Timeline", "step3Timeline", "step4Timeline"];

         step1Timeline.to(rt_step1, 0.6, {autoAlpha: 0, display: 'none', ease: Expo.easeInOut, y: -30})
                      .to(rt_step2, 0.6, {autoAlpha: 1, display: 'block', ease: Expo.easeInOut, y: 0});
         step1Timeline.pause();

         step2Timeline.to(rt_step2, 0.6, {autoAlpha: 0, display: 'none', ease: Expo.easeInOut, y: -30})
                      .to(rt_step3, 0.6, {autoAlpha: 1, display: 'block', ease: Expo.easeInOut, y: 0});
         step2Timeline.pause();

         step3Timeline.to(rt_step3, 0.6, {autoAlpha: 0, display: 'none', ease: Expo.easeInOut, y: -30})
                      .to(rt_step4, 0.6, {autoAlpha: 1, display: 'block', ease: Expo.easeInOut, y: 0});
         step3Timeline.pause();

         step4Timeline.to(rt_step4, 0.6, {autoAlpha: 0, display: 'none', ease: Expo.easeInOut, y: -30})
                      .to(rt_step5, 0.6, {autoAlpha: 1, display: 'block', ease: Expo.easeInOut, y: 0});
         step4Timeline.pause();

     This is what I'm trying to do, but I get the error that numbers is not a function. Has anyone done this before?

         for (i = 0; i < numbers.length; i++) {
             numbers[i].to(rt_step1, 0.6, {autoAlpha: 0, display: 'none', ease: Expo.easeInOut, y: -30})
                       .to(rt_step2, 0.6, {autoAlpha: 1, display: 'block', ease: Expo.easeInOut, y: 0});
             numbers[i].pause();
         }

     Thanks so much for your attention and all the help.
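     A minimal sketch of one way to build these in a loop (same selectors as above): the array needs to hold the actual TimelineMax instances rather than strings like "step1Timeline", which is why calling .to() on an array entry fails in the original version.

         var steps = [], timelines = [];

         for (var i = 1; i <= 5; i++) {
             steps.push($('.contact_module2016 .step' + i));
         }

         for (var j = 0; j < steps.length - 1; j++) {
             var tl = new TimelineMax({paused: true});
             tl.to(steps[j], 0.6, {autoAlpha: 0, display: 'none', ease: Expo.easeInOut, y: -30})
               .to(steps[j + 1], 0.6, {autoAlpha: 1, display: 'block', ease: Expo.easeInOut, y: 0});
             timelines.push(tl);
         }

         // later: timelines[0].play(), timelines[1].play(), etc.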
  9. Hi! I want to convert TimelineLite elements, with their animations, into a canvas animation - I mean the whole DOM element and its children that contain TimelineLite animations. An animation like this one: http://codepen.io/ihatetomatoes/pen/QboVVV Is there a way to do that? I've found some libraries that only take a screenshot of the page, but not the animations.
  10. Dear all, I have seen this tutorial where the animation can be controlled by the input value of a slider: http://greensock.com/sequence-video I have a variable defined in my code which has a value between 0 and 1. Is it possible to control the progress of the animation with the value from this variable?
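      A minimal sketch (the selector and the onMyValueChange hook are hypothetical): if the variable is already normalized to 0-1, it can be fed straight into the timeline's progress() method whenever it changes.

          var tl = new TimelineMax({paused: true});
          tl.to(".box", 1, {x: 500})
            .to(".box", 1, {rotation: 360});

          function onMyValueChange(myValue) {   // myValue is between 0 and 1
              tl.progress(myValue);
          }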
  11. First of all, you can find a simplified demo of my code in this JSFiddle. I found that my problem happens the way I describe it in Google Chrome, so if you plan to try and fix the bug, please use that browser. I apologize if the code is not very well simplified; please consider that this is a snippet from a bigger project. I'm working on a webapp that uses jQuery and GreenSock's TweenLite for animations. This app consists of some menus that control everything, which are transitioned between using the bodyChange() function. This function has two parameters: nextOrPrev, which runs one animation or another based on the value provided ("next" or "prev") - only the "next" animation is done yet, but that is not important for now; the "prev" animation, not yet used, just emits an alert("prev") - and bodyFunction. The function provided will fill the body with the elements necessary for that menu, and then wrap them in a #bodyWrap. In the demo I provide there are only two menus: the first one, mainMenu, with only a #playButton. When you click it, the bodyChange() function is called with the following parameters: ("next", playSettingsBody), playSettings being the second menu. This is the problem: when you click the playButton, the button goes up on the screen and then executes the TweenLite animation. I can't see, however, why the button "jumps up" instead of staying in the same place while the animation executes. This is probably due to a small mistake. What is it? Thanks for any help.
  12. GreenSock

    GSAP 1.19.0 Released

    Note: This page was created for GSAP version 2. We have since released GSAP 3 with many improvements. While it is backward compatible with most GSAP 2 features, some parts may need to be updated to work properly. Please see the GSAP 3 release notes for details.

    GSAP version 1.19.0 introduces some exciting new features for advanced users as well as conveniences for everyone (even the "greenest" novices). The most noteworthy improvements are summarized below.

    Function-based values
    Instead of a number (x:100) or string (width:"300px") or relative value (y:"+=50"), you can now define most values as a function that'll get called once for each target the first time the tween renders, and whatever is returned by that function will be used as the value. This can be very useful for randomizing things or applying conditional logic. See it in action in the demos below. See the Pen BzmGba by GreenSock (@GreenSock) on CodePen.

    ...and more
    GSAP 1.19.0 is more ES6-friendly (for example, you can npm install gsap and then import {TweenLite, Elastic, TimelineMax} from "gsap" in your project). Plenty of bug fixes too. See the whole list in the github changelog.

    DOWNLOAD GSAP TODAY

    Happy tweening!
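    A small sketch of the function-based values described above (".box" is a hypothetical selector): the function runs once per target when the tween first renders, so every matched element receives its own random destination.

        TweenMax.to(".box", 1, {
            x: function() {
                return Math.random() * 400 - 200;   // a different x for each ".box"
            },
            ease: Power2.easeOut
        });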
  13. I know this is a bit off topic, but I wanted to know: is there any way to do this? I'm creating an animation editor and I used GSAP for tweens and timelines, and I want users to be able to export animations as video and download them.
  14. Can you use GSAP's timeline to execute a JavaScript function at a certain time?
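      A minimal sketch of one way to do this (the element and callbacks are hypothetical): a timeline's call() method places a callback at any position, and add() accepts a function as well.

          function myFunction(msg) { console.log("called with", msg); }
          function anotherFunction() { console.log("also called"); }

          var tl = new TimelineLite();
          tl.to("#box", 1, {x: 300})
            .call(myFunction, ["someParam"], this, 2.5)   // fire at the 2.5-second mark
            .add(anotherFunction, "+=1");                 // or 1 second after the end of the timeline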
  15. I created the following animation using CSS/JavaScript, but was told that it would be easier to achieve what I need with GreenSock: http://codepen.io/gtdesign/pen/JKGmeO I would like help in seeing if there is a way to hover over one of the circles - say, for instance, "Gutters" - so that the rotation stops and the "Gutters" circle gets larger and changes color. Can all of this be done using GSAP? Thanks for any help, Andy
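      A rough sketch of how this could work in GSAP (the class names are hypothetical, and jQuery is assumed only for the hover binding): one repeating rotation tween drives the wheel, and each circle pauses it and scales up on hover.

          var spin = TweenMax.to(".wheel", 20, {rotation: 360, repeat: -1, ease: Linear.easeNone});

          $(".circle").hover(
              function () {
                  spin.pause();
                  TweenMax.to(this, 0.3, {scale: 1.3, backgroundColor: "#88ce02"});
              },
              function () {
                  spin.resume();
                  TweenMax.to(this, 0.3, {scale: 1, backgroundColor: "#cccccc"});
              }
          );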
  16. Hi all, I am new to GSAP and inexperienced with JavaScript, so please bear with me. I am currently trying to make a Go To Top button that follows you down the page, and when you click the button it brings you back to the top of the page. So far it has been working... unless you want to use it more than once. For some reason, I can't get it to play again after the first click. I have been using TimelineMax and ScrollTo to achieve this so far. I have been trying something like this to fix it, but no luck, so if you can, please point me in the right direction:

         // GO TO TOP BUTTON
         var goToTopTi = new TimelineMax({paused: true});
         goToTopTi.to(window, 1, {scrollTo: {y: 0}, ease: Power2.easeOut});

         var isPressedGoToTop = 0;
         document.querySelector("#goToTopButton").addEventListener("click", function() {
             if (isPressedGoToTop === 0) {
                 goToTopTi.play().timeScale(1);
                 isPressedGoToTop++;
             } else {
                 goToTopTi.repeat().timeScale(1);
             }
         });

     I also tried something similar with no "if" and just goToTopTi.play().timeScale(1), thinking it would simply play each time, and had no luck with that either. Thanks in advance for all your help, Tech Soul. P.S. I am not using jQuery on this site for speed reasons, in case you're thinking the answer is a jQuery thing.
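     A minimal sketch of one likely fix (same variable names as above; ScrollToPlugin assumed to be loaded): after the first click the playhead is parked at the end, so play() has nowhere left to go. restart() rewinds the timeline and plays it again on every click, with no flag needed.

         var goToTopTi = new TimelineMax({paused: true});
         goToTopTi.to(window, 1, {scrollTo: {y: 0}, ease: Power2.easeOut});

         document.querySelector("#goToTopButton").addEventListener("click", function() {
             goToTopTi.restart();   // rewind to 0 and play, every time
         });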
  17. Hello, I have a problem with Adobe Animate and TimelineMax. The problem is that I need to navigate back and forward in the Animate timeline and play the same TimelineMax animations, which affect the same element, in and out. In the first keyframe of the Animate timeline I have one button and this:

         var root = this;
         root.stop();
         root.chk1.visible = 0;
         root.continuar.visible = 0;

         function sigue0() {
             root.play();
         }

         function entrar(vel) {
             root.boton1.disabled = true;
             tl0 = new TimelineMax({onComplete: sigue0});
             tl0.from(root.boton1, vel, {x: root.boton1.x - 150, alpha: 0, ease: Back.easeOut}, .0);
         }

         entrar(1);

     In the next frame I have this:

         var root = this;
         root.stop();
         if (this.control == 1) {
             root.chk1.visible = 1;
             root.continuar.visible = 1;
         }

         function sigue1() {
             tl1.pause(0).clear();
             root.play();
         }

         this.boton1.addEventListener("click", fl_MouseClickHandler1.bind(this));
         function fl_MouseClickHandler1() {
             this.control = 1;
             root.chk1.visible = 0;
             tl1 = new TimelineMax({onComplete: sigue1});
             tl1.to(root.boton1, 1, {scaleY: 2, scaleX: 2, alpha: 0}, .0);
         }

     In the next frame, this:

         var root = this;
         root.stop();
         if (this.control == 1) {
             root.gotoAndPlay("op1");
         }

     And at the label "op1" I have this:

         var root = this;
         root.stop();
         this.volver.addEventListener("click", fl_MouseClickHandler1.bind(this));
         function fl_MouseClickHandler1() {
             this.control = 1;
             this.gotoAndPlay(0);
         }

     The problem is that the first time I execute this, fl_MouseClickHandler1() waits until every animation is finished and then executes onComplete and the sigue1 function, but the second time fl_MouseClickHandler1() is executed it doesn't wait for the elements' animation and executes onComplete immediately. How can I fix that? I have created e-learning courses in Flash for the last 10 years and now I want to do the same with Animate, but I need to navigate back and forward a lot of times in the Animate timeline. Sorry for my English, and best regards, Eduardo
  18. Hi, I am a coding rookie and I am stuck on the positioning. How can I always move the items toward the center, whatever the screen size is? http://codepen.io/wenjingfei/pen/RRNYeY Thank you!
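      A small sketch of one common approach (".item" and the positioned container are hypothetical): put each item's anchor point at the center of the container with percentage offsets, then let GSAP pull it back by half of its own size with xPercent/yPercent, so it stays centered at any screen size.

          TweenLite.set(".item", {
              position: "absolute",
              left: "50%",
              top: "50%",
              xPercent: -50,   // shift back by half the element's own width
              yPercent: -50    // ...and half its own height
          });

          // animating to/from the center then only needs relative x/y offsets, e.g.:
          TweenLite.from(".item", 1, {x: 300, autoAlpha: 0});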
  19. Hey guys, I have a dynamic template where you can input CSS into the XML feed, and the banner will read that at run time and overwrite the styles using set:

          TweenMax.set(element, newValuesObject);

      And I've managed to cobble together bits of string replaces and regex:

          var myDynamicCSS = "color:#ffffff; background-color: #00ffbb; text-align: left; border: 1px solid green";

          String.prototype.toCamelCase = function() {
              return this.replace(/^([A-Z])|[\s-](\w)/g, function(match, p1, p2, offset) {
                  if (p2) return p2.toUpperCase();
                  return p1.toLowerCase();
              });
          };

          const convertCSS2JS = (css) => {
              // split "prop: value;" pairs into "prop:value," entries
              let frameCSS = css.replace(/([\w-.]+)\s*:\s*([^;]+);?/g, '$1:$2,');
              frameCSS = frameCSS.replace(/,+$/, '');
              let properties = frameCSS.split(', ');
              let frameCSSObj = {};
              properties.forEach(function(property) {
                  let cssProp = property.split(':');
                  let cssKey = cssProp[0].toCamelCase();
                  let cssValue = cssProp[1].trim();
                  frameCSSObj[cssKey] = cssValue;
              });
              return frameCSSObj;
          };

          TweenMax.set(myDynamicElement, convertCSS2JS(myDynamicCSS));

      But I'm unsure of how easily breakable this is. Codepen: http://codepen.io/joemidi/pen/GZVedX
  20. Hi, I was wondering if I can call the DirectionalRotation plugin directly in JavaScript. Something like:

          var shortAngle = TweenMax.directionalRotation("420_short");
          // shortAngle = 60

      I just need the value returned by the plugin; I'm not using it in a tween or anything. Is that possible?
  21. Hey guys! I am working up a new animated banner for my boss's site. I have 4 banners. Each one animates into view and out of view. I just finished all their animations. I commented out the JS and HTML for each individual banner while I worked on its animation. They are strung together as one lengthy animation, due to my lack of knowledge with JS and GSAP, but I think it should work okay. However, when I removed all my commented-out JS and HTML to string all the animations together to play in "sequence", every animation except the first one lags into place and doesn't show the whole animation. I am not really equipped to figure this out, and I have no idea why it would act this way. The animations play smoothly when not linked together. You can see an example preview of my project here: http://portalpacific.net/Al/PennyGrab2.1/ And I have also included a zip file here for anyone willing to take a look and help me: http://portalpacific.net/Al/PennyGrab2.1/PennyGrab2.1.zip Thanks so much for your time! Any help at this point is beyond appreciated!
  22. Hi! First of all, I'm a total JS noob, so apologies if this is a really simple question. I'm trying to animate burning embers, based on this confetti codepen: http://codepen.io/MAW/pen/wBGvgW But it's reliant on jQuery - I need a jQuery-less solution using only TweenMax. Is this possible? Trying to change the code myself (http://codepen.io/htwinam/pen/EPOryg) just throws up endless errors and doesn't work, so I would be massively grateful for any help. Thanks
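     A rough sketch of the jQuery-free part (the class name, container id, and numbers are all hypothetical): create and append the ember elements with plain DOM calls, then hand them to TweenMax exactly as a jQuery object would have been.

         var container = document.getElementById("container");

         for (var i = 0; i < 50; i++) {
             var ember = document.createElement("div");
             ember.className = "ember";
             container.appendChild(ember);

             TweenMax.set(ember, {x: Math.random() * 500, y: 550, scale: 0.5 + Math.random()});
             TweenMax.to(ember, 3 + Math.random() * 3, {
                 y: -50,
                 x: "+=" + (Math.random() * 100 - 50),
                 opacity: 0,
                 repeat: -1,
                 delay: Math.random() * 3,
                 ease: Linear.easeNone
             });
         }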
  23. Have you ever wondered why GreenSock doesn't use a more common open source license like MIT? Sometimes our licensing model is misunderstood but hopefully after reading this article you'll see why it is so foundational to the success of our tools and why it ultimately benefits users too.

Open source is awesome!

If you had to write and maintain your own jQuery or React or GSAP, could you do it? Would it be as refined and reliable? Probably not. How many times have you found a library or chunk of code on github that saved you hours of development time? Open source can be a quick path to great results. It also serves as a jumping-off point for up-and-coming developers to share their innovation, inspiring others to write better code. Open source is easily one of the most influential movements in the entire development community. We're certainly not anti-open-source. Quite the contrary.

Open source is *not* always awesome

Despite its strengths, there are some dark sides to open source. Many projects are riddled with bugs, poorly documented, and sometimes even dangerous to use. The web is littered with abandoned projects that once seemed promising. It's like a clearance bin you'd find at a discount store; there may be some treasures in there, but you'll have to dig through a lot of undesirables to find the gems. In an industry that's inundated weekly with "hot new" libraries, all just a click away for free, it can be tough to figure out which ones to gamble on.

Commitment required(?)

Some libraries are one-trick ponies that don't require ongoing commitment. For example, a math library that performs matrix operations or a formatting library. If the author abandons the project, it's no big deal. Other projects, like a full-featured animation library, are much more complex and leverage browser-level optimizations that are moving targets. Vendor-specific bugs need to be worked around or new browser features get introduced that beg to be animated. Plus, animation is the most performance-sensitive aspect of the user experience, so a commitment to ongoing optimization pays dividends. Beyond the code itself, what about community, documentation and examples? These become increasingly important as a library's feature set expands and companies standardize on it, needing to train new staff. Again, some projects have very little need for ongoing support but for GreenSock, it seemed essential.

Hampered by success

Paradoxically, success is the very thing that kills many open source projects because they don't have a funding mechanism to underwrite all the demands. The project that was once the twinkle in the author's eyes often ends up being a thorn in their side. They can't afford (or don't really want) to keep up with the demands. That's not to say that all open source projects suffer this fate. We have the utmost respect for open source authors, and we don't mean to diminish anyone's hard work or generosity. But there are some common frailties of open source projects in general that GreenSock aims to avoid. Having built and maintained a popular library for over a decade, we've learned that one of the most important factors in keeping a complex project like GSAP vibrant is the licensing model. Our goal was to make GreenSock tools not only accessible (which MIT does a great job of), but also sustainable, business-friendly, consistent, and respectful.
We needed a license that would have the best chance of facilitating those goals, as described below.

Sustainable

Scores of animation libraries have come and gone over the years. We didn't want to be just another flash in the pan that ultimately leaves users feeling abandoned. Trust is paramount for us. We wanted the GreenSock brand to be associated with an exceptionally high level of commitment and reliability. Earning that trust requires a consistent, full-time effort, so we considered these funding models:

Self-funded - most open source projects are self-funded, meaning that the authors cover all the costs themselves, typically by donating their time and resources. It's noble and perfectly adequate for many projects. An MIT (or similar) license is a great fit because it allows an author to share code with almost zero strings attached. However, the commitment level tends to be unreliable and self-funded projects are rarely sustained long-term. With our particular project and goals, this wasn't a good fit.

Corporate sponsorship - sometimes huge companies like Adobe, Microsoft, Facebook, or Google are willing to contribute funds to encourage a project. They usually have a vested interest in its success. For example, Adobe helps fund CreateJS, which is a set of libraries that its very own Adobe Animate (formerly Flash) relies on. React is underwritten by Facebook. This can be a great way to offload development costs onto some deep-pocketed corporations, keeping it totally free for end users. The risk, however, is that if the project goals don't line up with the corporation's agenda (which can shift), funding gets yanked or the developers get pressured to go in a direction that may not be good for end users. If the company has a bad year, they may re-allocate resources. The project is ultimately beholden to a few key sponsors with potentially conflicting objectives. This wasn't something we were comfortable with, although we acknowledge that it's a great solution for certain projects.

Private investors - famo.us is probably the most well-known example of an "open source" project that took on private investors. A few years and $30,000,000+ later, it stopped development of its open source library and laid off a large chunk of its staff, deciding to "pivot" in a more profitable commercial direction. Private investors want a return on their investment (which is completely understandable), but an MIT license can make that very difficult because it leaves the IP (intellectual property) wide open for anyone to steal, including competitors. Similar to corporate sponsorship, projects that get their funding this way are beholden to a few key stakeholders who may have a very different agenda than end users. Again, this wasn't an ideal fit for GreenSock.

User-funded - when funding comes directly from end users, it creates a wider base (more stable) and incentivizes the kind of innovation that end users actually care about, otherwise funding naturally dries up. That's a healthy dynamic for aligning agendas. Rather than serving a small set of outside investors looking for a return or a corporation with profit centers to bolster, end users play the central role. The downside, of course, is that certain features or use cases involve a fee. That can be a tough pill to swallow for some, especially when "free" MIT-licensed options abound. And it takes time to build up a wide base of users who literally "buy in". Trust and longevity are key.
But a surprising number of users embrace this model because it allows them to "vote" with their dollars and participate in making a product they love continue to thrive. While this model certainly isn't for everyone, it has been a good fit for GreenSock.

Accessible

Sometimes we hear comments like "you can't use GreenSock in commercial projects without a license." WRONG. [loud buzzer noise]. Our license was designed to make the tools extremely accessible, even permitting usage in one-off commercial projects (where you get paid a one-time fee by a single client). And of course it's free to use in non-commercial projects (see licensing page for details). That way, a certain breed of power-user provides the funding that benefits everyone. An estimated 95%+ of our users never pay us a dime. So while a rare type of commercial project does require a "Business Green" Club GreenSock membership (which covers an unlimited number of projects while active), the vast majority of users never need that special license. This accessibility was a cornerstone of our approach. We didn't want to hide all the tools behind an intimidating paywall or make all commercial uses trigger a fee. Yet it couldn't be as unrestrained as MIT because that would create vulnerabilities for us and our users (as described in this article). Another way we keep the core tools accessible is by freely exposing the raw source code both on github and in our downloads. Users can peek under the hood and see how things work. That makes troubleshooting and learning much easier. We tried to strike a balance of openness and healthy insulation from the frailties of MIT.

Business-friendly

Most businesses are very concerned about IP infringement, lawsuits, and indemnification. The "no strings attached and no warranties" nature of MIT is both a strength and a weakness. It simplifies sharing, but what if an open source library leverages someone else's IP? What if it uses "copyleft" code that infects anything it touches, requiring that all projects using it be open-sourced as well? That could be a huge problem for businesses with proprietary tools in a competitive market. GreenSock's license doesn't have any attribution requirements, nor does it impose share-alike rules like GNU and Creative Commons. Furthermore, it contains warranties that aren't found in MIT-like licenses, making GreenSock more business-friendly. It survived the software audits and legal review process at reputable companies like Google, Sony, EA, Intel, every major ad network, and many others. This vetting is necessary in the business world where there is so much at stake. The license also allows for code edits to be made for bug fixes. Typically that's not necessary because we handle it as a part of our ongoing support efforts, but businesses appreciate knowing that if we relaxed those efforts, they'd still be able to get their project working if they ran into a bug and needed to squash it themselves.

Consistent

MIT-licensed projects spread on github where it's trivial to fork them and start making custom flavors. That's ideal for authors whose goal is to start something and then step back to let the community run with it. But that can lead to a lot of confusion as different flavors start popping up with incongruent feature sets and incompatibilities. Focus and direction are easily lost. If there's no driver (or too many drivers), it can be a scary ride for the passengers. We wanted the GreenSock brand to convey a certain level of consistency and reliability.
That's part of the reason we don't generally accept pull requests - we keep a tight rein on the codebase so that we are intimately aware of every piece. That allows us to not only support it, but also stand behind our IP warranties.

Respectful

We believe that if we respect our users, they'll generally reciprocate that respect. Most people want to do the right thing. So we don't inject "phone home" code that reports usage or causes things to suddenly stop working when a membership expires. We don't force business customers to register each user or enter serial numbers to activate seats. We don't limit installations or the number of projects that the license covers. We don't burn extra energy policing usage. We've never sued anyone. We funnel our energy into refining the tools, innovating, educating, and supporting our users. We put a lot of effort into creating a positive, respectful culture in the forums, which boast over 50,000 members and 80,000 posts. We don't charge a dime for support there, and we don't shamelessly promote Club GreenSock memberships. Again, we trust that if we keep trying to provide value, people will notice and gladly sign up to support the efforts at some point. An unintended benefit of our licensing model has been that it naturally weeds out users who expect everything for free and don't recognize the substantial effort that goes into these projects. We're left with users who tend to be very respectful and trustworthy (exactly who we like to serve).

Conclusion

GreenSock isn't the typical open source project. Our licensing model reflects that. It offers a blend of sustainability, accessibility, business-friendliness, consistency, and respect that'd be very difficult (or impossible) to accomplish with an MIT-like license. It's not necessarily "better" (open source is fantastic, really), but in our particular case it aligned more closely with the project goals. In fact, many companies have chosen GreenSock because of the license, not in spite of it. They wanted to invest in a platform that had a certain level of commitment behind it that's rare in the open source world. After a decade, the experiment seems to be working and we have our supporters to thank for that. We've been humbled and inspired by Club GreenSock members throughout the years. It's a privilege to create these tools and serve the community. We're grateful for their partnership which makes it all possible.
  24. Hi, I'm a novice when it comes to this stuff, but I've recently set up a holding page for a friend of mine and she wanted a video on it. However, I found out that a video is no good on iOS devices, as it starts off paused and you have to click to play it. Therefore I started browsing the net to find a solution, which seemed to be to turn the video into an image sequence. That's when I found this example codepen: http://codepen.io/jamiejefferson/pen/aJcGl So I used the JavaScript and bits of the CSS to create my code: http://codepen.io/anon/pen/pgrQZO However, as you can see, it is very glitchy. I've been trying to iron out the flickering, but with no luck. Any ideas how to smooth it all out, as I'm really stuck? Many thanks in advance for any help! http://greenwich-design.co.uk/clients/guiltydolls/guilty_dolls2/
  25. Hello, I am working with a banner that is 970x418. I am trying to create a parallax effect on y-scroll (drag). In order to do this, I assume that:
      1) I need to create a "container div" with a "foreground div" and a "background div," the foreground on top of the background.
      2) Set the container itself to scroll/drag and affect both divs together.
      3) Change the speed of the background div to be a little slower, in order to give the parallax illusion.
      So far, I haven't found anything that shows me how to work with dragging one item and using the info of that item to affect the positioning of another. I've seen sites that offer code libraries like ScrollMagic, but they are not easy when it comes to finding out how to do specific things, only what is shown in their examples. Please help.
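      A rough sketch of one way to handle the "drag one thing, move another" part (the ids are hypothetical; Draggable and TweenLite are assumed to be loaded): make the foreground draggable along y and, in onDrag, move the background a fraction of the same distance to create the parallax illusion.

          Draggable.create("#foreground", {
              type: "y",
              bounds: "#container",
              onDrag: function () {
                  // this.y is the foreground's current y offset; move the background at half speed
                  TweenLite.set("#background", {y: this.y * 0.5});
              }
          });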