Showing results for tags 'news'.
Found 11 results

  1. Note: This page was created for GSAP version 2. We have since released GSAP 3 with many improvements. While it is backward compatible with most GSAP 2 features, some parts may need to be updated to work properly. Please see the GSAP 3 release notes for details.

Chrome 53 debuted a new "feature" to improve animation performance and graphics fidelity, but it had some nasty side effects that caused quite a few animators to get unpleasant phone calls from angry clients whose ads and web sites suddenly looked blurry and/or stuttery. Every other browser (including previous versions of Chrome) renders the same animation beautifully. Chrome's new behavior may also result in WORSE animation performance. A lengthy discussion with the Chrome team revealed some disturbing tradeoffs that animators need to know about, and that could spell trouble with other browsers too unless we band together as a community and make our voices heard. At the heart of the controversy is the will-change CSS property.

What is "will-change"?

It gives developers a way to say "hey, browser, I'm gonna animate this property, so please do whatever you can to prepare and make it happen smoothly," which often means creating a compositor layer to get GPU-acceleration of transforms and opacity. Think of a compositor layer like a screenshot of the element that the browser can store on the GPU to move/scale/rotate cheaply instead of re-computing all the pixels on each screen refresh. Read Sara Soueidan's excellent in-depth article here for details.
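In code, the opt-in is just a CSS property or its JavaScript equivalent - a minimal sketch (the selector is purely illustrative):

// CSS: .animated-thing { will-change: transform; }
// or, equivalently, from JavaScript:
var el = document.querySelector(".animated-thing");
el.style.willChange = "transform"; // hint that this element's transform will animate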
Problem: blurry, stuttering animations

What is Chrome 53 doing differently? Chrome is basically saying "In the past, I intelligently managed when and how to rasterize elements, but now that the will-change property exists, I'll just make that serve as a blind toggle switch for rasterization instead. If things look blurry, it's not my problem - the developer will need to jump through some hoops (described below) to make things look sharp again."

It all boils down to how and when "rasterization" of an element occurs (changing it into pixels stored on the GPU). If rasterization happens when the element is very small, it will be lower resolution. When scaled up, it'll look blurry/pixelated. On the other hand, if you rasterize while it's at its native scale (1) or above, you'll get a much higher-quality image with more pixels. Another key factor is how it is rasterized. Apparently Chrome uses a completely different algorithm for rasterizing an <img> than a <div> with a background-image, even though both use identical source files and are sized the same! Here's an example of how they look in Chrome 53:

Update: The Chrome team says they've fixed the bug that caused background-image to render differently than <img> (issue 649046) and it should be in the 9/29 release of Chrome Canary.

Both factors (the when and the how) are at play in Chrome 53's new behavior (which apparently was a purposeful engineering choice based on will-change). According to this document, all content now gets re-rasterized when its scale changes (which happens up to 60 times per second during animation of scale/scaleX/scaleY properties). That's supposed to keep things sharp, but in this case Chrome's background-image algorithm applies some sort of pixel-snapping which causes that odd vibration during animation. (Update: should be fixed soon). This re-rasterization comes at a cost performance-wise too.

Previously, Chrome applied some heuristics to sense when it was appropriate to re-rasterize to avoid blurriness, so it only kicked in when necessary. But now you must opt in to get the layerizing benefits by setting will-change: transform. The Chrome team says this is an "improvement" because it puts the control into the hands of developers, but clearly this shift in behavior comes at quite a price. Overnight, the rug got pulled out from under many animations around the web, hurting performance (due to the constant re-rasterizing by default) and also introducing those visual vibrations when scaling background-image graphics (quite common for sprite sheets). To be clear, this primarily affected scaling animations, not ALL animations on the web. And of course anything where a background-image was used and the element was layerized (making it blurry in Chrome 53).

Partial solution: set will-change: transform

Complaints rolled in quickly, and the Chrome team suggested that developers go back and edit all their affected animations by adding will-change: transform, which would layerize/rasterize those elements (skipping re-rasterizing on every refresh). It's a bit of a nightmare to go back and find all the affected animations and make the necessary edits, but hey, it's just adding one property to the CSS so it shouldn't be too bad, right?

Oops, that breaks it in other ways

Chrome chose to implement will-change such that it will trigger rasterization at whatever the current scale happens to be, so if you've got an element that starts at scale(0.1) and animates up to scale(1), rasterization happens at scale(0.1), thus it will look terrible (blurry/pixelated) at the end of the animation. Here's an example showing the SAME image animating to identical scales, but flip-flopped starting/ending values: (View the codepen here)

Partial solution: toggle will-change back and forth

Hold your nose...here comes the hack. In order to trigger re-rasterization of the element to keep it clear, the Chrome team suggested toggling will-change back to auto during the animation, then waiting until a requestAnimationFrame elapses before setting it back to transform...and then doing it again, and again, at whatever frequency the developer wants in order to keep things acceptably sharp. So will-change: auto is being pressed into service to explicitly tell the browser "rasterize me on the next requestAnimationFrame." Yes, you read that correctly: animators must turn OFF the very property whose entire purpose is to be ON for animation, signaling change. Then toggle it back and forth many times during the animation. So we're essentially telling the browser "I'm gonna change this...no I'm not...yes I am...nope..." all while in the process of animating. This doesn't exactly sound consistent with the intent of will-change, nor does it seem performant (Google's document says "Be aware, however, that there is often a large one-time performance cost to adding or removing will-change: transform.")
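Here's roughly what that suggested hack looks like - a rough, untested sketch (the element, tween, and frequency are illustrative, not an official API):

// toggle will-change to "auto" to request a re-rasterization,
// then restore "transform" on the next animation frame
var el = document.querySelector(".animated-thing");

function rerasterize() {
  el.style.willChange = "auto"; // "rasterize me on the next requestAnimationFrame"
  requestAnimationFrame(function() {
    el.style.willChange = "transform"; // then re-layerize
  });
}

var tick = 0;
TweenMax.to(el, 2, {
  scale: 1,
  onUpdate: function() {
    if (++tick % 10 === 0) rerasterize(); // re-rasterize every ~10 ticks to stay sharp-ish
  }
});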
Gotcha: never de-layerize

Even if you're willing to follow the advice to set will-change: transform and toggle back and forth during the animation to maintain a reasonable level of clarity, there's one last gotcha - if you set it back to will-change: auto at the end of the animation and give it a non-3d transform (to de-layerize it), you'll see a jarring shift in pixels and clarity for anything with a background-image! The Chrome team advises in that case that you make sure it always has a 3d transform thereafter to prevent it from de-layerizing. Following that advice puts you at risk of running out of memory (or hitting performance problems), plus the background-image will always be slightly blurry. Here's what it looks like to toggle between the two modes at the end of the animation:

Update: The Chrome team clarified that this was a temporary fix, not a long-term solution. The background-image rendering bug should be resolved soon, and this "never de-layerize" suggestion will no longer apply at that point.

The bigger issue, beyond Chrome...

The will-change spec doesn't really specify implementation details, which means that Chrome's new behavior may be completely unique; Firefox might do something different, and then there's Edge, Safari, Opera, Android, etc. Perhaps Chrome requires that developers toggle back and forth to maintain clarity, but what if Firefox interprets that differently, or imposes a big performance penalty when doing the same thing? What if developers must resort to various [potentially conflicting] hacks for each browser, bloating their code and causing all sorts of headaches? We may have to resort to user agent sniffing again (did you just throw up a little in your mouth?). This will-change property that was intended to SOLVE problems for animators may end up doing the opposite. It seems wise for the browsers to step back and let the spec authors fill in the implementation details and gain consensus before moving forward.

Another problem: stacking contexts

As mentioned in this article, will-change can also affect the stacking context of elements, leading to unintended changes in how things render/stack on your page. So your content may stack differently in browsers that do support will-change than in those that don't. More sniffing, yay!

To summarize:

Before Chrome 53

Just animate stuff. No need to jump through any hoops as a developer to get decent clarity (though the Chrome team points out that there was still some blurriness in certain edge cases that they've heard complaints about).

After Chrome 53

- Make sure to set will-change: transform if you're animating transforms and want to opt in to performance optimizations and avoid jittery background-image scaling. But be careful about how it might affect stacking contexts, and keep checking when other browsers decide to implement will-change, as it could change how your content looks.
- If you're scaling up, make sure you toggle will-change back and forth to/from auto during the animation to maintain clarity (but sacrifice performance).
- Make sure there's a 3D transform applied throughout to prevent de-layerization (which would cause a big performance hit). Don't switch back to a 2D transform at the end (at least for elements with a background-image), or you'll see a jarring pixel shift. (Update: should be fixed soon in Chrome, so this step won't be necessary at some point.)
- Don't forget to go back and find/fix existing animations that are affected by the new Chrome behavior.

Is Chrome going to fix this?

As of today, the Chrome team says they've thought a lot about this and feel pretty strongly that the new behavior is an improvement, so there's no plan to change it (that we know of at least).
Here are the solutions we proposed (with the answers we got):

Instead of putting the burden on developers to manually toggle will-change back and forth between transform and auto during the animation, just have the browser natively sense when re-rasterization is prudent and do that automatically, which would be much faster than JS anyway. It seems rather trivial for the browser to sense when an element has been scaled by more than a certain delta (like 0.2) and trigger a rasterization.
Chrome team's answer (summary): "That's too hard (complex). The browser might get it wrong sometimes, so it's better to have developers do it at the JS level. And JS isn't that much slower than native. It's just a small amount of extra work for animators (or library developers)."

The browser could always perform rasterization at native size (scale of 1) or the current scale, whichever is bigger. That way, nobody would run into those nasty blurred images when scaling up from a small value to 1 (pretty common) and there's no need to keep re-rasterizing.
Chrome team's answer: "rastering at native size and then scaling that down with bilinear filters in the compositor to something less than 0.5 or so will start to look noticeably bad."

If the goal is to put control into developers' hands, why not expose an API for defining what scale rasterization should occur at, like element.cacheTransform = "scale(1)"?
Chrome team's answer: "it may not be so easy to get it right in terms of expressiveness, and requires new APIs...and also to define what raster scale means, which seems quite tricky in general, especially while not over-fitting to current implementation strategies. That might happen later on."

The browser should use the same algorithm to rasterize (and scale) anything. The one being used by Chrome for <img> looks great, so please use that for background-image too.
Chrome team's answer (summary): "Acknowledged. We're working on a fix."

Instead of turning will-change into a convoluted way to control rasterization in the browser and risk opening a can of worms with other browsers doing things completely differently due to vague specs, roll back the behavior and work with the spec authors to define implementation details and re-approach later when consensus is reached.
Chrome team's answer: "the intention of will-change is to give a hint to the browser that the referenced property is going to be animated, and for the browser to take steps to optimize performance for that use case. This is why will-change: transform creates a composited layer: because animating transform afterward will therefore not later have the startup and per-frame cost of creating the composited layer and rasterizing it. Following this logic, further extending the meaning of will-change: transform to not re-raster on scale change is similar, because it will make it faster."

Can GSAP fix it for me?

Yes and no. We've already experimented with the suggestions the Chrome team made, but there are a few tricky challenges. First, we're hyper-focused on performance, so it's quite painful to have this new Chrome behavior force us to add extra logic that must run on every tick of every tween of any transform-related animation. It probably wouldn't be noticeable unless you're animating hundreds or thousands of elements simultaneously, but we built GSAP to handle crazy amounts of stress because sometimes that's what a project requires, so we're pretty frustrated by Chrome's decision to impose this burden on animators and libraries like GSAP.
But yes, we could do the toggling under the hood automatically and accept the performance tradeoff. Another major problem, though, is totally in the hands of browsers: rendering. We can't fix that jarring pixel shift at the end of animations of a background-image. We can force the 3D transform to remain, but as described above, that leaves things blurry and unnecessarily eats up memory. We work very hard to implement workarounds for browser inconsistencies and bugs like this, but we can't work miracles. We really need Chrome to step up and provide some better solutions.

Conclusion

There's no doubt that the Chrome team is working hard to move the web forward and deliver the best experience for their users. At GreenSock, Chrome is our primary browser that we use every day, so we're big fans. This article isn't intended to criticize anyone, but rather to bring attention to something that could spell big trouble for animators in the days ahead, beyond the headaches Chrome 53 caused with its new behavior. Hopefully Chrome will roll back the changes and/or implement some of the suggestions above. We'd encourage folks to make their voices heard (on the Chrome thread, below in the comments, with the spec authors, etc.). Perhaps we got something wrong - feel free to correct us or make other suggestions. Ultimately we want to help move animation forward on the web, so please join us.
  2. Good afternoon, colleagues! In the solution below, the moving line sometimes "twitches" for some reason. Strangely, it's only in my project that it "jerks" (as if it starts over each time instead of continuing); the version uploaded to codepen.io runs stably and nothing jumps. What could this be connected with? What is the RIGHT way to implement this? Sincerely.
  3. As the author of GSAP I'm sometimes asked if the Web Animations API (WAAPI) will be used under the hood eventually. My responses have gotten pretty long, so I thought I'd share my findings with everyone here. Hopefully this sheds light on the challenges we face and perhaps it can lead to some changes to WAAPI in the future.

WAAPI is a native browser technology that's similar to CSS animations, but for JavaScript. It's much more flexible than CSS animations and it taps into the same mechanisms under the hood so that the browser can maximize performance. Overall support has gotten pretty good, but there are multiple levels to the spec, so some browsers may support only "level 1", for example. The hope is that eventually all major browsers will support WAAPI fully. Progress in that direction has been very, very slow.

You could think of WAAPI almost like a browser-level GSAP with a bunch of features stripped out. This has led some to suggest that perhaps GSAP should be built on TOP of WAAPI to reduce file size and maximize performance. Ideally, people could tap into the rich GSAP API with all of its extra features while benefiting from the browser's native underpinnings wherever possible. Sounds great, right? Unfortunately, WAAPI has some critical weak spots that make it virtually impossible for GSAP to leverage it under the hood (at least in any practical manner). I don't mean that as a criticism of WAAPI. In fact, I really wanted to find a way to leverage it inside GSAP, but I'll list a few of the top reasons why it doesn't seem feasible below. To be clear, this is NOT a feature comparison or a bunch of reasons why GSAP is "better" - these are things that make it architecturally impossible (or very cumbersome) to build GSAP on WAAPI.

Custom easing

WAAPI only supports cubic-bezier() for custom easing, meaning it's limited to one segment with two control points. It can't support eases like Bounce, Elastic, Rough, SlowMo, wiggle, ExpoScaleEase, etc. GSAP must support all of those eases plus any ease imaginable (unlimited segments and control points - see the CustomEase page). This alone is a deal-breaker. Expressive animation hinges on rich easing options.

Independent transform components

The most commonly animated values are translation (x/y position), rotation, and scale (all transform-related), but you cannot control them independently with CSS or WAAPI. For example, try moving (translate) something and then halfway through start rotating it and stagger a scale toward the end:

|-------- translateX() ------------|
          |------ rotate() ------|
  |---------- translateY() -----------|
      |-------------- scale(x,y) --------------|

Animators NEED to be able to independently control these values in their animations. Additive animations (composite:"add") probably aren't an adequate solution either. It's unrealistic to expect developers to track all the values manually or assume that stacking them on top of each other will deliver expected results - they should be able to just affect the rotation (or whatever) at any time, even if there was a translate() or scale() applied previously. GSAP could track everything for them, of course, but continuously stacking transforms on top of each other seems very inefficient and I imagine it'd hurt performance (every transform is another matrix concatenation under the hood). There is a new spec being proposed for translate, scale, and rotate CSS properties which would certainly help, but it's not a full solution because you still can't control the x/y components independently, or all of the 3D values like rotationX, rotationY, z, etc. This is an essential feature of GSAP that helped it become so popular.

Example: See the Pen Independent Transforms Demo by GreenSock (@GreenSock) on CodePen.
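For comparison, here's roughly how that scenario looks in GSAP - a minimal GSAP 2 sketch (the selector and timings are illustrative):

// each transform component gets its own tween, overlapping freely on a timeline
var tl = new TimelineLite();
tl.to(".box", 2, {x: 300}, 0)           // translateX() starts at 0s
  .to(".box", 1.5, {rotation: 180}, 1)  // rotate() kicks in halfway through
  .to(".box", 2.5, {y: 100}, 0.5)       // translateY() overlaps both
  .to(".box", 1, {scale: 1.5}, 2);      // scale(x,y) staggered toward the end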
Custom logic on each tick

Certain types of animations require custom logic on each tick (like physics or custom rounding/snapping or morphing). Most GSAP plugins rely on this sort of thing (ModifiersPlugin, for example). I'm pretty sure that's impossible with WAAPI, especially with transforms being spun off to a different thread (any dependencies on JS-based logic would bind it to the main thread).

Non-DOM targets

As far as I know, WAAPI doesn't let you animate arbitrary properties of generic objects, like {myProperty:0}. This is another fundamental feature of GSAP - people can use it to animate any sort of objects including canvas library objects, generic objects, WebGL values, whatever.
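A quick sketch of what that looks like in GSAP 2 (the object and property names are illustrative):

// tween a plain object's property and run custom logic on every tick
var counter = {value: 0};
TweenMax.to(counter, 2, {
  value: 100,
  onUpdate: function() {
    // per-tick logic: render to canvas, update text, feed WebGL, etc.
    console.log(Math.round(counter.value));
  }
});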
Global timing controls

I don't think WAAPI lets you set a custom frame rate. Also, GSAP's lag smoothing feature requires the ability to tweak the global time (not timeScale - I mean literally the current time, so that all the animations are pushed forward or backward). As far as I know, it's impossible with WAAPI.

Synchronization (transforms and non-transforms)

As demonstrated in this video, one of the hidden pitfalls of spinning transforms off to another thread is that they can lose synchronization with other main-thread-based animations. As far as I know, that hasn't been solved in all browsers. We can't afford to have things getting out of sync. Some have proposed that GSAP could just fall back to using a regular requestAnimationFrame loop to handle things that aren't adequately supported by WAAPI, but that puts things at risk of falling out of sync. For example, if transforms are running on a separate thread, they might keep moving while other parts of the animation (custom properties that get applied somehow in an onUpdate) slow down or jank.

Compatibility

GSAP has earned a reputation for "just working" across every browser. In order to deliver on that, we'd have to put extra conditional logic throughout GSAP, providing fallbacks when WAAPI isn't available or doesn't support a feature. That would balloon the file size substantially and slow things down. That's a tough pill to swallow. WAAPI still isn't implemented in several major browsers today. And then there are the browser inconsistencies (like the infamous SVG origin quirks) that will likely pop up over time, and then we'd have to unplug those parts from WAAPI and maintain the legacy raw-JS mechanisms internally. Historically, there are plenty of cross-browser bugs in natively-implemented technologies, making it feel risky to build on top of.

Performance

WAAPI has a performance advantage because it can leverage a separate thread whereas JavaScript always runs on the main thread, right? Well, sort of. The only time a separate thread can be used is if transforms and/or opacity are the only things animating on a particular element (or else you run into synchronization issues). Plus there's overhead involved in managing that thread, which can also get bogged down. There are tradeoffs either way. Having access to a different thread is fantastic even if it only applies in certain situations. That's probably the biggest reason I wanted to leverage WAAPI originally, but the limitations and tradeoffs are pretty significant, at least as it pertains to our goals with GSAP. Surprisingly, according to my tests GSAP was often faster than WAAPI. For example, in this speed test WAAPI didn't perform as well as GSAP on most devices I tried. Maybe performance will improve over time, but for now be sure to test to ensure that WAAPI is performing well for your animations.

File Size

To ensure compatibility, GSAP would need all of its current (non-WAAPI) code in place for fallbacks (most browsers won't fully support WAAPI for years) and then we'd need to layer in all the WAAPI-specific code on top of that, like conditional logic checking for compatible eases, sensing when the user is attempting something WAAPI can't support, tracking/stacking additive animations, etc. That means file size would actually be far worse if GSAP were built on WAAPI. Some have suggested creating a different adapter/renderer for each tech, like a WebAnimationsAdapter. That way, we could segregate the logic and folks could just load it if they needed it, which is clever, but it doesn't really solve the problem. For example, some plugins may affect particular CSS properties or attributes, and at some point conditional logic would have to run to say "oh, if they're using the WebAnimationsAdapter, this part won't work, so handle it differently." That conditional logic generally makes the most sense to have in the plugin itself (otherwise the adapter file would fill up with extra logic for every possible plugin, bloating file size unnecessarily and separating plugin logic from the plugin itself). So then if anyone uses that plugin, they'd pay a price for that extra logic that's along for the ride.

Weighing the Pros & Cons

At the end of the day, the list of "pros" must outweigh the "cons" for this to work, and currently that list is quite lopsided. I'd love to find a way to leverage any of the strengths of WAAPI inside GSAP for sure, but it just doesn't seem feasible or beneficial overall. The main benefit I see in using WAAPI inside GSAP is to get the off-the-main-thread-transforms juice, but even that only seems useful in relatively uncommon scenarios, and it comes at a very high price. I'm struggling to find another compelling reason to build on WAAPI; GSAP already does everything WAAPI does plus a lot more. I'm hopeful that some of the WAAPI benefits will someday be possible directly in JS so that GSAP wouldn't have to create a dependency on WAAPI to get those. For example, browsers could expose an API that'd let developers tap into that off-the-main-thread transform performance. Ideally, browsers would also fix that hacky matrix()/matrix3d() string-based API and provide a way to set the numeric matrix values directly - that'd probably deliver a nice speed boost. Please chime in if I'm missing something, though. (Contact us or post in the forums.)

Why use GSAP even if/when WAAPI gets full browser support?

- Browser bugs/inconsistencies
- Lots of extra features like morphing, physics, Bezier tweening, text tweening, etc.
- Infinite easing options (Bounce, Elastic, Rough, SlowMo, ExpoScaleEase, Wiggle, Custom)
- Independent transform components (position, scale, rotation, etc.)
- Animate literally any property of any object (not just DOM)
- Timeline nesting (workflow)
- GSDevTools
- Relative values and overwrite management
- from() tweens are much easier - you don't need to get the current values yourself
- Familiar API

Why WAAPI might be worth a try

- If you don't need broad browser support today or any of GSAP's unique features, you could save some kb by using WAAPI
- Solid performance, especially for transforms and opacity
- Always free

Again, the goal of this article is NOT to criticize WAAPI at all. I think it's a great step forward for browsers. I just wanted to explain some of the challenges that prevent us from using it under the hood in GSAP, at least as it stands today.

EDIT: Brian Birtles, one of the primary authors of the WAAPI spec, reached out and offered to work through the issues and try to find solutions (some of which may involve editing the spec itself), which is great. Rachel Nabors has also worked to connect people and find solutions. Although it may take years before it's realistic to consider building on WAAPI, it's reassuring to have so much support from leaders in the industry who are working hard to move animation forward on the web.

2020 EDIT: Now Brian Birtles is the only one working on WAAPI, and he does so on a volunteer basis, so further development of WAAPI has understandably slowed down in recent times.
  4. Have you ever wondered why GreenSock doesn't use a more common open source license like MIT? Sometimes our licensing model is misunderstood, but hopefully after reading this article you'll see why it is so foundational to the success of our tools and why it ultimately benefits users too.

Open source is awesome!

If you had to write and maintain your own jQuery or React or GSAP, could you do it? Would it be as refined and reliable? Probably not. How many times have you found a library or chunk of code on github that saved you hours of development time? Open source can be a quick path to great results. It also serves as a jumping-off point for up-and-coming developers to share their innovation, inspiring others to write better code. Open source is easily one of the most influential movements in the entire development community. We're certainly not anti-open-source. Quite the contrary.

Open source is *not* always awesome

Despite its strengths, there are some dark sides to open source. Many projects are riddled with bugs, poorly documented, and sometimes even dangerous to use. The web is littered with abandoned projects that once seemed promising. It's like a clearance bin you'd find at a discount store; there may be some treasures in there, but you'll have to dig through a lot of undesirables to find the gems. In an industry that's inundated weekly with "hot new" libraries, all just a click away for free, it can be tough to figure out which ones to gamble on.

Commitment required(?)

Some libraries are one-trick ponies that don't require ongoing commitment. For example, a math library that performs matrix operations or a formatting library. If the author abandons the project, it's no big deal. Other projects, like a full-featured animation library, are much more complex and leverage browser-level optimizations that are moving targets. Vendor-specific bugs need to be worked around or new browser features get introduced that beg to be animated. Plus, animation is the most performance-sensitive aspect of the user experience, so a commitment to ongoing optimization pays dividends. Beyond the code itself, what about community, documentation and examples? These become increasingly important as a library's feature set expands and companies standardize on it, needing to train new staff. Again, some projects have very little need for ongoing support, but for GreenSock it seemed essential.

Hampered by success

Paradoxically, success is the very thing that kills many open source projects because they don't have a funding mechanism to underwrite all the demands. The project that was once the twinkle in the author's eye often ends up being a thorn in their side. They can't afford (or don't really want) to keep up with the demands. That's not to say that all open source projects suffer this fate. We have the utmost respect for open source authors, and we don't mean to diminish anyone's hard work or generosity. But there are some common frailties of open source projects in general that GreenSock aims to avoid. Having built and maintained a popular library for over a decade, we've learned that one of the most important factors in keeping a complex project like GSAP vibrant is the licensing model. Our goal was to make GreenSock tools not only accessible (which MIT does a great job of), but also sustainable, business-friendly, consistent, and respectful.
We needed a license that would have the best chance of facilitating those goals, as described below:

Sustainable

Scores of animation libraries have come and gone over the years. We didn't want to be just another flash in the pan that ultimately leaves users feeling abandoned. Trust is paramount for us. We wanted the GreenSock brand to be associated with an exceptionally high level of commitment and reliability. Earning that trust requires a consistent, full-time effort, so we considered these funding models:

Self-funded - most open source projects are self-funded, meaning that the authors cover all the costs themselves, typically by donating their time and resources. It's noble and perfectly adequate for many projects. An MIT (or similar) license is a great fit because it allows an author to share code with almost zero strings attached. However, the commitment level tends to be unreliable and self-funded projects are rarely sustained long-term. With our particular project and goals, this wasn't a good fit.

Corporate sponsorship - sometimes huge companies like Adobe, Microsoft, Facebook, or Google are willing to contribute funds to encourage a project. They usually have a vested interest in its success. For example, Adobe helps fund CreateJS, which is a set of libraries that its very own Adobe Animate (formerly Flash) relies on. React is underwritten by Facebook. This can be a great way to offload development costs onto some deep-pocketed corporations, keeping it totally free for end users. The risk, however, is that if the project goals don't line up with the corporation's agenda (which can shift), funding gets yanked or the developers get pressured to go in a direction that may not be good for end users. If the company has a bad year, they may re-allocate resources. The project is ultimately beholden to a few key sponsors with potentially conflicting objectives. This wasn't something we were comfortable with, although we acknowledge that it's a great solution for certain projects.

Private investors - famo.us is probably the most well-known example of an "open source" project that took on private investors. A few years and $30,000,000+ later, it stopped development of its open source library and laid off a large chunk of its staff, deciding to "pivot" in a more profitable commercial direction. Private investors want a return on their investment (which is completely understandable), but an MIT license can make that very difficult because it leaves the IP (intellectual property) wide open for anyone to steal, including competitors. Similar to corporate sponsorship, projects that get their funding this way are beholden to a few key stakeholders who may have a very different agenda than end users. Again, this wasn't an ideal fit for GreenSock.

User-funded - when funding comes directly from end users, it creates a wider base (more stable) and incentivizes the kind of innovation that end users actually care about, otherwise funding naturally dries up. That's a healthy dynamic for aligning agendas. Rather than serving a small set of outside investors looking for a return or a corporation with profit centers to bolster, end users play the central role. The downside, of course, is that certain features or use cases involve a fee. That can be a tough pill to swallow for some, especially when "free" MIT-licensed options abound. And it takes time to build up a wide base of users who literally "buy in". Trust and longevity are key.
But a surprising number of users embrace this model because it allows them to "vote" with their dollars and participate in making a product they love continue to thrive. While this model certainly isn't for everyone, it has been a good fit for GreenSock.

Accessible

Sometimes we hear comments like "you can't use GreenSock in commercial projects without a license." WRONG. [loud buzzer noise]. Our license was designed to make the tools extremely accessible, even permitting usage in one-off commercial projects (where you get paid a one-time fee by a single client). And of course it's free to use in non-commercial projects (see licensing page for details). That way, a certain breed of power-user provides the funding that benefits everyone. An estimated 95%+ of our users never pay us a dime. So while a rare type of commercial project does require a "Business Green" Club GreenSock membership (which covers an unlimited number of projects while active), the vast majority of users never need that special license. This accessibility was a cornerstone of our approach. We didn't want to hide all the tools behind an intimidating paywall or make all commercial uses trigger a fee. Yet it couldn't be as unrestrained as MIT because that would create vulnerabilities for us and our users (as described in this article). Another way we keep the core tools accessible is by freely exposing the raw source code both on github and in our downloads. Users can peek under the hood and see how things work. That makes troubleshooting and learning much easier. We tried to strike a balance of openness and healthy insulation from the frailties of MIT.

Business-friendly

Most businesses are very concerned about IP infringement, lawsuits, and indemnification. The "no strings attached and no warranties" nature of MIT is both a strength and a weakness. It simplifies sharing, but what if an open source library leverages someone else's IP? What if it uses "copyleft" code that infects anything it touches, requiring that all projects using it be open-sourced as well? That could be a huge problem for businesses with proprietary tools in a competitive market. GreenSock's license doesn't have any attribution requirements, nor does it impose share-alike rules like GNU and Creative Commons. Furthermore, it contains warranties that aren't found in MIT-like licenses, making GreenSock more business-friendly. It survived the software audits and legal review process at reputable companies like Google, Sony, EA, Intel, every major ad network, and many others. This vetting is necessary in the business world where there is so much at stake. The license also allows for code edits to be made for bug fixes. Typically that's not necessary because we handle it as a part of our ongoing support efforts, but businesses appreciate knowing that if we relaxed those efforts, they'd still be able to get their project working if they ran into a bug and needed to squash it themselves.

Consistent

MIT-licensed projects spread on github where it's trivial to fork them and start making custom flavors. That's ideal for authors whose goal is to start something and then step back to let the community run with it. But that can lead to a lot of confusion as different flavors start popping up with incongruent feature sets and incompatibilities. Focus and direction are easily lost. If there's no driver (or too many drivers), it can be a scary ride for the passengers. We wanted the GreenSock brand to convey a certain level of consistency and reliability.
That's part of the reason we don't generally accept pull requests - we keep a tight rein on the codebase so that we are intimately aware of every piece. That allows us to not only support it, but also stand behind our IP warranties.

Respectful

We believe that if we respect our users, they'll generally reciprocate that respect. Most people want to do the right thing. So we don't inject "phone home" code that reports usage or causes things to suddenly stop working when a membership expires. We don't force business customers to register each user or enter serial numbers to activate seats. We don't limit installations or the number of projects that the license covers. We don't burn extra energy policing usage. We've never sued anyone. We funnel our energy into refining the tools, innovating, educating, and supporting our users. We put a lot of effort into creating a positive, respectful culture in the forums, which boast over 50,000 members and 80,000 posts. We don't charge a dime for support there, and we don't shamelessly promote Club GreenSock memberships. Again, we trust that if we keep trying to provide value, people will notice and gladly sign up to support the efforts at some point. An unintended benefit of our licensing model has been that it naturally weeds out users who expect everything for free and don't recognize the substantial effort that goes into these projects. We're left with users who tend to be very respectful and trustworthy (exactly who we like to serve).

Conclusion

GreenSock isn't the typical open source project. Our licensing model reflects that. It offers a blend of sustainability, accessibility, business-friendliness, consistency, and respect that'd be very difficult (or impossible) to accomplish with an MIT-like license. It's not necessarily "better" (open source is fantastic, really), but in our particular case it aligned more closely with the project goals. In fact, many companies have chosen GreenSock because of the license, not in spite of it. They wanted to invest in a platform that had a certain level of commitment behind it that's rare in the open source world. After a decade, the experiment seems to be working, and we have our supporters to thank for that. We've been humbled and inspired by Club GreenSock members throughout the years. It's a privilege to create these tools and serve the community. We're grateful for their partnership, which makes it all possible.
  5. Before jumping into Club GreenSock for the super-cool bonus plugins, perhaps you're plagued by questions like: Will the bonus plugins work well for my project? How difficult is the API to work with? Will they play nicely with my other tools? Will they work in Edge? Firefox? ...

That's why we created special versions of the plugins that can be used on CodePen anytime...for FREE! The video below shows how to get up and running fast.

Video Demo with quick-copy URLs

See the Pen Try Club GreenSock Bonus Plugins FREE on Codepen by GreenSock (@GreenSock) on CodePen.

Template (fork this):

See the Pen GreenSock Bonus Starter Template by GreenSock (@GreenSock) on CodePen.

Of course we offer a money-back satisfaction guarantee with Club GreenSock anyway, but hopefully this helps give you even more confidence to sign up. CodePen is an online, browser-based editor that makes it easy to write and share front-end code. If you need help using CodePen, check out their interactive editor tour.
  6. Note: This page was created for GSAP version 2. We have since released GSAP 3 with many improvements. While it is backward compatible with most GSAP 2 features, some parts may need to be updated to work properly. Please see the GSAP 3 release notes for details.

Club GreenSock members get a fresh batch of superpowers with updates to SplitText and ScrambleText, plus two brand new customizable eases in this 1st-Quarter 2017 update! It's our way of saying "thanks" to those who support our ongoing efforts. Let's dig right into the upgrades...

SplitText

SplitText now supports nested tags like <span>, <strong>, <em>, <p>, etc. This means you can split text with various styling and even links. To make things even cooler, SplitText now supports emoji characters!

SplitText demo

See the Pen BzmGba by GreenSock (@GreenSock) on CodePen.

"Simply Green" gets ScrambleText

We moved ScrambleText into the "Simply Green" membership level to deliver even more value.

NEW: CustomWiggle and CustomBounce

Add wiggle and bounce effects to your GSAP animations with these brand new, highly customizable eases. Even get squash and stretch! CustomWiggle and CustomBounce are now included for anyone with a "Shocking Green" or "Business Green" membership. To learn more visit greensock.com/wiggle-bounce.

Thanks

We're incredibly grateful to each and every Club GreenSock member, as we simply couldn't do what we do without your ongoing support. It's a privilege to have such a top-notch community around GSAP. Please feel free to plug in even more by participating in the forums to help others or to get your questions answered. It's a fantastic place to learn and be inspired. Enjoy the fresh updates, and happy tweening!
  7. GSAP 1.19.0 Released
Note: This page was created for GSAP version 2. We have since released GSAP 3 with many improvements. While it is backward compatible with most GSAP 2 features, some parts may need to be updated to work properly. Please see the GSAP 3 release notes for details.

GSAP version 1.19.0 introduces some exciting new features for advanced users as well as conveniences for everyone (even the "greenest" novices). The most noteworthy improvements are summarized below:

Function-based values

Instead of a number (x:100) or string (width:"300px") or relative value (y:"+=50"), you can now define most values as a function that'll get called once for each target the first time the tween renders, and whatever is returned by that function will be used as the value. This can be very useful for randomizing things or applying conditional logic. See it in action in the demos below (and in the sketch at the end of this item).

See the Pen BzmGba by GreenSock (@GreenSock) on CodePen.

...and more

GSAP 1.19.0 is more ES6-friendly (for example, you can npm install gsap and then import {TweenLite, Elastic, TimelineMax} from "gsap" in your project). Plenty of bug fixes too. See the whole list in the github changelog.

DOWNLOAD GSAP TODAY

Happy tweening!
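To illustrate the function-based values described above, here's a minimal sketch (the selector and values are illustrative, not from the release notes):

// each ".box" target gets its own x destination, computed once
// when the tween first renders
TweenMax.to(".box", 1, {
  x: function() {
    return Math.random() * 200; // randomized per target
  }
});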
  8. Note: This page was created for GSAP version 2. We have since released GSAP 3 with many improvements including a much smaller file size! Please see the GSAP 3 release notes for details.

In recent months, the whirlwind shift to HTML5 in the banner ad industry has prompted a slew of policy changes. Publishers and networks are scrambling to answer questions from designers who want to build things "properly" (a term whose meaning can vary wildly these days). Growing pains abound. Shortly after we published an article describing the state of affairs and offering recommendations for a path forward, the IAB released a draft of their new HTML5 ad specs which echoed many of those recommendations. For example, the file size limit for standard ads was raised to 200kb. This was cause for much celebration across the industry. But we're not out of the woods yet. There is still a persistent mindset about how we look at kilobytes that's crippling the web. This article aims to challenge the old paradigm and explain why it's so important to re-assess kilobyte costs.

Why limit kilobytes?

Conventional wisdom says that kilobytes have a direct impact on load times and consequently user experience. File size limits exist to promote better performance. Period. Does conventional wisdom apply the same way in the HTML5 era? As we modernize our kilobyte-count policies, let's remember what the goal is: performance...or more accurately, better user experience.

Not all kilobytes are created equal

HTML5 has unique strengths that challenge us to move beyond the simplistic "aggregate total file size" mentality of yesteryear. We need to look at kilobyte cost in a new, more nuanced way. There are four primary factors:

Cache status

When is 35kb not really 35kb? When it's cached. A cached file has absolutely zero bandwidth cost regardless of its size. It loads immediately. This is particularly relevant in banner ads because there are certain chores common to almost every ad (like animation management tasks) that can be encapsulated and shared among many, many banners. The end user only loads that shared resource once and then it's cached and completely "free" thereafter...for all ads pointing at that file...on all sites. Does it really make sense to penalize ads for using those ubiquitous shared resources even though they're so pervasively cached that they don't typically affect load performance? In the modern world, file size limits should apply to the banner-specific assets that have a direct impact on loading times, not standardized shared resources.

Location

Kilobytes loaded from a CDN (Content Delivery Network) are typically "faster" because they're geographically dispersed and automatically loaded from the closest server. Plus CDNs have inherent redundancies leading to better reliability.

Spread

200kb spread across 24 files will take longer to download than 200kb spread across only 4 files. This isn't particularly relevant in the discussion about shared resources (which should be cached very quickly), but it is an argument in favor of loading TweenMax rather than the combination of TweenLite, CSSPlugin, EasePack, and TimelineLite, even though they collectively use fewer kilobytes. If the industry remains focused solely on aggregate total file size, it pushes designers/developers toward less capable solutions that use fewer kilobytes even though they don't necessarily affect load times and could be replaced by more robust options that allow more creative expression.
Performance yield

Some kilobytes are cheap in terms of the initial load but expensive for runtime execution. If our goal is better user experience, this factor should weigh heavily into the overall kilobyte cost equation. Would you rather have a banner that loads 200ms faster with janky animation or one that's super-smooth at runtime but takes a fraction of a second longer to load? Are publishers primarily concerned with displaying ads faster initially or ensuring that their site runs smoothly once loaded? Of course there are reasonable limits either way (waiting an extra 30 seconds for a huge file to load would be intolerable even if it made things run buttery smooth), but in most cases we're only talking about fractions of a second difference. For example, GSAP contains "extra" code that automatically GPU-accelerates transforms, applies lag smoothing, leverages 3D transform matrices, avoids layout thrashing, organizes things internally to make auto overwriting super fast, etc. - would removing those features for the sake of milliseconds on initial load (and zero savings once it's cached) be a step forward or backward?

Incentivizing the wrong things

If the IAB and publishers don't embrace a common set of shared resources that get excluded from file size calculations...

- It creates inefficiencies and redundancies - thousands of ads may each contain their own [duplicate] copy of a library like TweenMax, squandering valuable bandwidth.
- It penalizes the use of robust, industry-standard libraries in favor of custom JS and micro-libraries that probably aren't nearly as capable, well-tested, cached, compatible, or performant.
- When something breaks, it will be more difficult for the various ad networks and publishers to troubleshoot and support custom JS and diverse micro-libraries.
- Lots of APIs to learn (or in the case of custom JS, more advanced expertise would be required).
- GreenSock would likely need to create a minimalistic version of TweenMax that has a tiny subset of the features. We feel strongly that this is a step backward and explain why here.

So ultimately, an "all-inclusive" file size policy could actually hurt load times as well as runtime performance. Yet the primary goal of file size limits is to protect performance. Hm.

If the industry rallies behind a few popular, well-maintained and performant libraries, exempting them from file size calculations because of their ubiquity, it would not only deliver a better overall user experience, but also make it easier to create high quality banners. There would be fewer headaches for networks and publishers too. TweenMax (GSAP's largest file) technically weighs around 34kb, but those kilobytes are the remarkably inexpensive kind. TweenMax is widely cached, it's on multiple CDNs, it packs various tools into a single file (zero "spread"), and it has an extremely high performance yield because of its many runtime performance optimizations. There are so many ads using it already that it has little or no effect on load times. Should it really cost banner ad designers/developers 34kb against their file size budgets? Is it wise to incentivize cooking up their own custom code for handling animation tasks instead?

The good news

Every major ad network we've contacted understands the value of shared resources and is very GSAP-friendly. In fact, virtually all of them have GSAP on their own CDNs and don't count its file size against ads unless the publisher insists otherwise (which is rare).
One notable exception is Adwords, but we have been told they're working on a solution.

Network               Allows GSAP   Excludes GSAP from file size calculation*   Hosts GSAP on CDN
Advertising.com/AOL   YES           YES                                         YES
Google DoubleClick    YES           YES                                         YES
Flashtalking          YES           YES                                         YES
Sizmek                YES           YES                                         YES
Flite                 YES           YES                                         YES
Cofactor              YES           YES                                         YES
Adwords               YES           YES                                         YES

*Unless the publisher objects, which is uncommon.

Google DoubleClick is even pioneering a process that will automatically detect when GSAP is used in an ad and swap in its own CDN link to maximize caching benefits. Pretty cool stuff.

Conclusion

Let's embrace the unique strengths of HTML5 and modernize the way we count kilobyte costs. Let's support policies that incentivize better performance and user experience rather than a race to the smallest total file size for each individual ad. Caching, CDNs, kilobyte spread, and performance yield should all factor into the way we view kilobytes in the HTML5 era.

FAQ

Isn't it unfair if the IAB only recommends a few popular libraries? What about newer or lesser-known libraries?

This is an entirely valid concern. The list should be reviewed regularly and the IAB can assess each library's industry support, performance profile, compatibility, and track record for ongoing updates and bug fixes. New contenders could be submitted for consideration anytime. But remember that the key to realizing the performance benefits is keeping the list short so that caching is focused and pervasive. If there are too many "standardized" libraries, it dilutes caching and defeats the purpose. There's no way that everyone will agree on which libraries should be on the list, but if we get hung up on not offending anyone or being afraid to appear biased, we'll miss the opportunity to move the industry forward. The list won't be perfect, but not having a list at all is much worse.

What happens when a library gets updated? Wouldn't it need to be re-cached?

Yes. And trust me - we want libraries to be updated somewhat regularly to work around new browser inconsistencies, patch library bugs, and implement new features that drive things forward. But these updates wouldn't need to interfere with existing or legacy ads - when a library is updated, a new CDN URL would be generated and new ads could optionally point to that version. Those ads would indeed trigger browsers to cache that new version, but this should happen very quickly. Most likely within a matter of days the new version would be pervasively cached across the web. Yes, each end user would pay that kilobyte tax once on the first load and then it would be "free" thereafter.

Would GreenSock still recommend this policy if GSAP wasn't included in the short list of exempt libraries?

Absolutely. This isn't just about getting GSAP an exemption - this is what we believe is best for the industry overall, even if GSAP isn't on the list.
  9. Note: This page was created for GSAP version 2. We have since released GSAP 3 with many improvements. While it is backward compatible with most GSAP 2 features, some parts may need to be updated to work properly. Please see the GSAP 3 release notes for details.

The latest version of GSAP delivers some fun new features that open up entirely new animation possibilities. Check out the videos and demos below that show what's so exciting about 1.18.0. Here's a quick summary:

- New "cycle" property allows you to add rich variations to staggered animations
- Relative HSL color tweens (affect just the hue, saturation or lightness)
- Complex string tweening like "1px 5px rgb(255,0,0)"
- Numerous improvements and bug fixes (see github)

New "cycle" property for staggered animations

Have you ever wanted to animate a bunch of elements/targets and alternate between certain values (or even randomize them) in a staggered fashion? The new super-flexible "cycle" property does exactly that. Instead of defining a single value (like x:100, rotation:90), you can define an Array of values to cycle through (like cycle:{x:[100,-100], rotation:[30,60,90]}) or even use function-based values (like cycle:{x:function() { return Math.random() * 200; }}). The amount of functionality you can pack into a single line of code is staggering (pun intended).

Demo: array-based and function-based "cycle" values

See the Pen Basic staggerTo() using cycle by GreenSock (@GreenSock) on CodePen.

Advanced "cycle" effects with SplitText

See the Pen SplitText with stagger and cycle by GreenSock (@GreenSock) on CodePen.

Caveats

The cycle property is available only in the staggerTo(), staggerFrom(), and staggerFromTo() methods in TweenMax, TimelineLite and TimelineMax. When using function-based values, the function will be passed an argument which represents the index of the current tween. Inside the function body, the scope (this) refers to the target of the current tween (see source of first demo above).
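For reference, a minimal sketch of the syntax described above (the selector and values are illustrative):

// targets alternate x between 100 and -100, and each target's rotation
// comes from a function-based value; tweens start 0.2 seconds apart
TweenMax.staggerTo(".box", 1, {
  cycle: {
    x: [100, -100],
    rotation: function(index) {
      return index * 30;
    }
  }
}, 0.2);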
Tween complex string-based values

Complex string-based values containing multiple numbers can now be animated without any extra plugins. For example, a value like "10px 20px 50px" can be animated to "4px 13px 200px". GSAP finds each number in the strings (in order), compares them, and animates the ones that changed. CSSPlugin already does this for CSS values (it even converts units), but the base engine (TweenLite) can now do basic string tweening too. It will even find rgba(...) values and round them appropriately during animation. This new feature extends to AttrPlugin, which means it can animate the complex strings inside SVG element attributes, like the points in a <polygon> or <polyline>, or even <path> data (please carefully read the caveats below).

See the Pen Complex string-based tweening: simple shape morph by GreenSock (@GreenSock) on CodePen.

Caveats

- This feature is NOT intended to perform complex shape morphing in SVG. It simply animates the numbers inside the strings (in order). Robust shape morphing requires dynamically parsing path data and injecting extra points in certain cases. This new complex string-based tweening lays the groundwork in the core for many advanced effects in the future, especially via plugins.
- If you're animating the "d" attribute of a <path> element or the "points" attribute of a <polygon> or <polyline> element, keep in mind that you MUST make sure the number (and type) of points match between the starting and ending values. And since those are attributes, use the AttrPlugin, which is already inside TweenMax (a sketch appears at the end of this post).

Community Demos

City Construction by Sarah Drasner
See the Pen City Construction Site by Sarah Drasner (@sdras) on CodePen.

GreenSock Cycle by Petr Tichy
See the Pen GreenSock - staggerTo with cycle by GreenSock (@GreenSock) on CodePen.

Special Thanks

This major update is extra special because it contains features that were largely shaped by feature requests and feedback from our community. We really appreciate the strong community in our forums that not only helps each other but also helps shape the tools themselves. Extra special thanks to Elliot Geno for suggesting cycle and relative HSL tweening, Diaco for being a testing powerhouse, and everyone who voted on the API changes. Now go download GSAP 1.18.0 and make something beautiful.
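And here's the string-tweening sketch promised in the caveats above. The element id, point values, and object are made up for illustration; note that both point strings contain the same number of points, as required:

//Given markup like <polygon id="poly" points="50,0 100,100 0,100">, AttrPlugin
//(bundled inside TweenMax) tweens each number in the attribute string in order:
TweenMax.to("#poly", 2, {attr:{points:"50,20 80,100 20,100"}});

//The base engine can tween string properties on plain objects too:
var obj = {shadow:"10px 20px 50px"};
TweenLite.to(obj, 2, {shadow:"4px 13px 200px"});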
  10. I need to do a small "news ticker" type animation - it would just consist of text scrolling along a single line. I spent a considerable amount of time looking around for something like this, and all of the references I found were either old Flash-based things or jQuery code that is many years old. Is there a simple way to use TweenMax to create a horizontal text scroller? Ideally, I'd like it to just use a bunch of <li> elements as sequential text scrolls. TIA!
  11. Note: This page was created for GSAP version 2. We have since released GSAP 3 with many improvements. While it is backward compatible with most GSAP 2 features, some parts may need to be updated to work properly. Please see the GSAP 3 release notes for details.

Have you ever tried getting a realistic wiggle effect or tweaking just how bouncy an ease is? What about adding squash and stretch to a bounce? These are not easy tasks. Well, until now. Even though CustomEase lets you create literally any easing effect you can imagine (bounces, wiggles, elastic effects, whatever) by drawing it, it's difficult to plot a complex wiggle or bounce while making sure all the points are spaced correctly. Wouldn't it be nice if you could just set a few parameters (like the number of wiggles or the bounciness) and have that complex easing curve created for you? Wish granted.

CustomWiggle

CustomWiggle extends CustomEase (think of it as a wrapper that creates a CustomEase under the hood based on the variables you pass in), allowing you to set not only the number of wiggles but also the type of wiggle (there are 5 types; see the demo below). Advanced users can even alter the plotting of the wiggle curve along either axis using the amplitudeEase and timingEase special properties (see the docs for details).

Demo: CustomWiggle Types
See the Pen CustomWiggle Demo : resized by GreenSock (@GreenSock) on CodePen.

Options

- wiggles (Integer) - the number of oscillations back and forth. Default: 10
- type (String) - "easeOut" | "easeInOut" | "anticipate" | "uniform" | "random" - the type (or style) of wiggle (see the demo above). Default: "easeOut"
- amplitudeEase (Ease) - provides advanced control over the shape of the amplitude (the y-axis in the ease visualizer). You define an ease that controls the amplitude's progress from 1 toward 0 over the course of the tween. Defining an amplitudeEase (or timingEase) overrides the "type" (think of the 5 "types" as convenient presets for amplitudeEase and timingEase combinations). See the example codepen to play around and visualize how it works.
- timingEase (Ease) - provides advanced control over how the waves are plotted over time (the x-axis in the ease visualizer). Defining a timingEase (or amplitudeEase) overrides the "type". See the example codepen to play around and visualize how it works.

How do you control the strength of the wiggle (or how far it goes)? Simply by setting the tween property values themselves. For example, a wiggle to rotation:30 is stronger than a wiggle to rotation:10. Remember, an ease just controls the ratio of movement toward whatever value you supply for each property in your tween.

Sample code

//Create a wiggle with 6 oscillations (default type:"easeOut")
CustomWiggle.create("myWiggle", {wiggles:6});

//now use it in an ease. "rotation" will wiggle to 30 and back just as much in the opposite direction, ending where it began.
TweenMax.to(".class", 2, {rotation:30, ease:"myWiggle"});

//Create a 10-wiggle anticipation ease:
CustomWiggle.create("funWiggle", {wiggles:10, type:"anticipate"});
TweenMax.to(".class", 2, {rotation:30, ease:"funWiggle"});

Wiggling isn't just for "rotation"; you can use it for any property. For example, you could create a swarm effect by using just 2 randomized wiggle tweens on "x" and "y", as demonstrated here (a minimal sketch follows below).
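Here's that two-tween swarm idea as a minimal sketch, assuming CustomWiggle (and the CustomEase it depends on) are loaded and ".dot" is a hypothetical selector:

//Two independently created "random"-type wiggles, one per axis. Because the
//axes wiggle out of sync, the element appears to drift unpredictably.
CustomWiggle.create("wiggleX", {wiggles:8, type:"random"});
CustomWiggle.create("wiggleY", {wiggles:8, type:"random"});
TweenMax.to(".dot", 4, {x:150, ease:"wiggleX"});
TweenMax.to(".dot", 4, {y:150, ease:"wiggleY"});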
CustomBounce

GSAP has always had the tried-and-true Bounce.easeOut, but there was no way to customize how "bouncy" it was, nor could you get a synchronized squash-and-stretch effect during the bounce, because:

- The "bounce" ease needs to stick to the ground momentarily at the point of each bounce while the squashing occurs. Bounce.easeOut offers no such customization.
- There was no way to create the corresponding [synchronized] scaleX/scaleY ease for the squashing/stretching. CustomEase solves this now, but it'd still be very difficult to manually draw that ease with all the points lined up in the right spots to match the bounces.

With CustomBounce, you can set a few parameters and it'll create BOTH CustomEases for you (one for the bounce, and one [optionally] for the squash/stretch). Again, think of CustomBounce as a wrapper that creates a CustomEase under the hood based on the variables you pass in.

Options

- strength (Number) - a number between 0 and 1 that determines how "bouncy" the ease is, so 0.9 will have a lot more bounces than 0.3. Default: 0.7
- endAtStart (Boolean) - if true, the ease ends back where it started, allowing you to get an effect like an object sitting on the ground, leaping into the air, and bouncing back down to a stop (see the sketch at the end of this post). Default: false
- squash (Number) - controls how long the squash should last (the gap between bounces, when the object appears "stuck"). Typically 2 is a good number, but 4 (for example) would make the squash longer relative to the rest of the ease. Default: 0
- squashID (String) - the ID assigned to the squash ease. The default is the ID of the bounce plus "-squash" appended to the end. For example, CustomBounce.create("hop", {strength:0.6, squash:2}) would default to a squash ease ID of "hop-squash".

How do you get the bounce and the squash/stretch to work together? You use two tweens - one for the position ("y") and one for the scaleX/scaleY - with both running at the same time:

//Create a custom bounce ease:
CustomBounce.create("myBounce", {strength:0.6, squash:3, squashID:"myBounce-squash"});

//do the bounce by affecting the "y" property.
TweenMax.from(".class", 2, {y:-200, ease:"myBounce"});

//and do the squash/stretch at the same time (scaleX/scaleY are ratios):
TweenMax.to(".class", 2, {scaleX:1.4, scaleY:0.6, ease:"myBounce-squash", transformOrigin:"center bottom"});

See the Pen CustomBounce from GreenSock by GreenSock (@GreenSock) on CodePen.

Where can I get it?

CustomWiggle and CustomBounce are membership benefits of Club GreenSock ("Shockingly Green" and "Business Green" levels). It's our way of saying "thanks" to those who support GreenSock's ongoing efforts. Joining Club GreenSock gets you a bunch of other bonus plugins and tools like MorphSVGPlugin as well, so check out greensock.com/club/ for details and sign up today.
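Finally, here's the minimal sketch of the endAtStart option promised above; the selector, duration, and values are illustrative, and CustomBounce (plus CustomEase) must be loaded:

//Because endAtStart:true makes the ease return to its starting value, the
//element leaps up toward y:-150, then falls and bounces to rest where it began.
CustomBounce.create("leap", {strength:0.5, endAtStart:true});
TweenMax.to(".class", 2.5, {y:-150, ease:"leap"});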