
TeamPulse Online Demo Launched


We are happy to announce that you can now explore our agile project management tool TeamPulse online, without having to install it. The online demo is a full-featured version of the product updated with the latest TeamPulse release.

agile project management tools demo 

The demo site is the perfect place to begin your evaluation of TeamPulse if you are short on time or just eager to work on an agile project and see the product in action. The demo includes a sample project, with meaningful data to serve as guidance and support during your evaluation time.

Try TeamPulse Online >>

 

For those of you who prefer to take your time and explore TeamPulse more thoroughly, you can download the 60-day trial. You will be happy to notice a significant improvement in speed and memory usage thanks to the service pack we introduced last week.

We hope you like the work we’ve done to make your evaluation experience with TeamPulse faster and easier.

Looking forward to your feedback and comments, here or in our forums.


Telerik Automated Testing Tools Roadmap is Here


It is hard to believe that January is already behind us as we dive deeper into 2011. For the testing tools, last year was an amazing year of product growth and innovation, and I would like to share with you some big announcements from the testing tools division for the upcoming year.

The first big announcement is that we are going to have two major releases a year for the Telerik automated testing tools in order to focus more on new product and feature development. The R1 release will be around the end of April and the R2 release will be around the end of September. We are very excited about this change because it will allow us to be more aggressive in the features we tackle while maintaining a high level of quality and support.

Another exciting announcement is the addition of WPF test automation support in the coming R1 release. All of the great recording and maintenance features that are currently offered in Test Studio for web will be available for WPF automation. The WPF version will also include full support for the Telerik WPF control suite, so if your WPF application uses the Telerik controls, testing will be even more productive.

As with all Telerik products, we are always working on new ways to increase agility and productivity within your development environment. As part of the R1 release we will offer collaboration between Test Studio and TeamPulse: with a click of a button in Test Studio you will be able to automatically generate tests from the user stories or user acceptance tests that were created in your TeamPulse project.

Along with these exciting products and features, be on the lookout for feature enhancements throughout Test Studio, including test scheduling, the test explorer, the results viewer and enhanced Silverlight automation.

Be sure to follow the testing tools blog or follow us on Twitter to stay up to date on all the exciting new features and announcements for the R1 release.

See the full list of R1 features here

Lastly, check out Telerik CEO’s reflections on the .NET world in his most recent blog post on a very exciting new feature in JustCode, and an opportunity to score one of 500 free licenses of JustCode.

Best Regards,
Christopher

Telerik / NCover Webinar Now on Telerik TV!


I just wrapped up a Telerik / NCover joint webinar with Daniel Waldschmidt, NCover's Technology Evangelist.

What a fantastic session! We provided great demonstrations of Telerik's Automated Testing Tools, including tests with 'If..Else Statements', 'Do...While Loops', 'Coded Steps' and the handy 'Test As Step' feature. We then packed these tests into two test lists - first running a baseline test list in WebUI Test Studio with NCover monitoring IIS, then following up with a more complete test list to see the spike in coverage.

Dan W. then took us on a tour of NCover, which showed us how much of the application's code had been covered by WebUI Test Studio's automation. Beyond that, NCover showed us which pages of the application lacked testing - having such information is incredibly powerful!

 

You do not want to miss this webinar - and I will ensure you don't, as it has been added to Telerik TV and can now be viewed 24/7 at your leisure.

Watch Code Coverage with Telerik WebUI Test Studio & NCover

Lastly - a big thanks to Stoich & Cody from Telerik for getting me in touch with the kind folks at NCover and providing the POC and KB Article - you guys rock!

 

Enjoy!

-Daniel Levy

 

 

 

 

TeamPulse R1 2011 is Here: Bug Tracking, Productivity Features


We are very excited to announce the official release of the R1 2011 version of our agile project management tool TeamPulse. This release expands the tool’s feature set with the addition of a Bug Tracking module, which allows users to record, triage, assign and evaluate bugs in an easy and intuitive way. We also wanted R1 2011 to be about increasing individual and team productivity. That’s why we introduced features like the “My Perspective” view, and the interactive TaskBoard, which give users more control over their daily work. Read through for the full list of new features.

Bug Tracking

As of this release, TeamPulse introduces its new Bug Tracking module, supporting the complete workflow of bug capturing, triaging (evaluating and prioritizing), and assignment of work.

TeamPulse GIDS 2011 award

Check out the video >>

My Perspective

The new personalized My Perspective view provides users with an immediate, personal perspective on assigned stories, tasks, and bugs. By aggregating everything that each user is working on into a single location, the tool allows team members to stay focused and be more productive.

TeamPulse GIDS 2011 award

 

TaskBoard

With R1 2011 we also introduce the new interactive TaskBoard, which allows tasks to be represented as index cards on a virtual whiteboard and organized in different columns based on their status. Tasks can also be grouped into rows using various filters. The TaskBoard is ideal for daily Scrum meetings, allowing teams to quickly review, move, and edit cards in real time.

In addition we also introduced My TaskBoard which is the same as the TaskBoard, but displays only the tasks assigned to the user viewing it.

A Quick Add Story option for creating new stories has also been added to the StoryBoard and TaskBoard, allowing team members to create stories right from within those views without ever leaving the context.

TeamPulse GIDS 2011 award

 

Tracking of Risk and Issues

This new functionality allows users to record and track risks and issues related to the project so that the team is better prepared for change.

TeamPulse GIDS 2011 award

 

Capture Customer Feedback and Requests

In addition to ideas, with R1 2011 users will also be able to capture customer feedback and incorporate it into the product's development plans.

TeamPulse GIDS 2011 award

 

Search for items by ID

A small but valuable new feature is the ability to search for project items by ID and quickly navigate to them from the search results.

quick find

 

Create New Project Wizard

Users of the latest TeamPulse version will be able to benefit from the new Create Project Wizard, which will guide them through the process of creating their first agile project.

TeamPulse GIDS 2011 award

 

New Pack of Best Practices and Reports

New best practices and reports are shipped with this release to track key agile metrics like velocity, iteration burndown, estimate by area and more. They can also be used to measure the level of agile adoption by the team as well as the health of the project.

Hosted Trial

Along with the new TeamPulse features, with R1 we also launched a 60-day Hosted Trial of TeamPulse. You can now create a dedicated account, which will be stored on our servers for 60 days and will let you evaluate TeamPulse without having to install it.

Create your hosted trial >>

and

Register for the What’s New Webinar for a more in-depth overview of the new features.

Why Would you Want to Write Applications for Windows 8 Now?


Introduction

Last week, we had our first Windows 8 webinar, titled “Why build for Windows 8 and how RadControls for Metro can help.” Two questions that we attempted to answer are, “Why build for Windows 8 now?” and “Why not wait until a future date when the platform is more stable?” These questions are valid from both a consumer and an enterprise point of view, and I’ll try to explain why we believe that you should start writing applications for Windows 8 today.

 

4 Solid Reasons to Start Building Today

Reason #1: You are building for the next generation of the most popular operating system in the world (which is Microsoft Windows)

Let me begin with this quote from ZDNet, “If there are 600 million copies of Windows 7 in use, and this represents 40 percent of the market, then simple arithmetic says the installed base is now 1.5 billion machines. The vast majority of them -- 92.53 percent or 1.4 billion -- run Microsoft Windows.” [source]

While we do not know how many copies of Windows 8 will be sold, we do know that Windows 7 is the best-selling operating system in history, and Windows 8 is its successor. Windows 8 also gives you the best of both worlds: you can choose between the classic “Desktop” mode and the new “Metro” mode. You can still run the same applications on Windows 8 that you can run on Windows 7, and you still have your preferred method of input, whether it be mouse/keyboard or touch-screen devices. In other words, nothing was removed, but features were added.

Microsoft is investing heavily in the operating system and while it may take some time to catch on, you can get a head start by working with it today.

Reason #2: There is no denying that the age of the slate / tablet PC is here right now.

We’ve been hearing it since 2010: your next PC will be some sort of slate or tablet device.

Times have changed. We used to be stuck at our office desks because we were using desktop computers, and if we wanted to “work from home” that meant either physically bringing the box home (hopefully, you wouldn’t do this), copying files to a USB thumb drive, or connecting via VPN.

Then came the laptop; finally we could take our work with us anywhere we wanted. The main issues with laptops were that they were originally heavy and could not match the specs of a powerful desktop PC, and you could pretty much forget about replacing a part if it failed. As time went by, laptops became smaller and lighter, but not everyone wants to drag a laptop around just to watch a movie or do some basic internet surfing.

This is where we started seeing devices such as the iPad take off. The iPad was a phenomenal success, and we expect the same for the recently announced Microsoft Surface slates. Windows 8 was built touch-first, meaning Microsoft expects users to be using the operating system on such devices.

Reason #3: You can write your application in a language that you already know.

If you are reading this blog, you have more than likely done some .NET development at some point in your career. Windows 8 presents developers with the opportunity to build Metro-style applications using the language of their choice: HTML5 with JavaScript and CSS3, or XAML with C#, Visual Basic or Visual C++.

So what language do you choose?

I think Jesse Liberty put it best here when he says, “Microsoft’s guidelines are to go with what you know – if you are already a XAML programmer, by all means invest in XAML for Windows 8. If you already are a JavaScript programmer, then follow Javascript to Windows 8. The folks who I know who are doing both say they are more productive with XAML, but of course HTML5 and JavaScript are very hot technologies right now.”

Reason #4: You can sell your application easily in the Windows Store or deploy it to your enterprise.

“First to market” is a popular phrase meaning you have the advantage of being first in an ever-expanding marketplace. The Windows Store makes your application available to millions of customers with minimal effort on your part. You package your application and upload it in a similar manner as you did with Windows Phone 7. You set the price and the markets you want your application distributed to, and Microsoft does the rest.

Microsoft also has deployment strategies for enterprise customers who wish to deploy their applications internally but not make them available to everyone else in the Windows Store. This deployment model frees enterprises from worries about security breaches, as the application can only be downloaded by selected individuals.

 

Wrap-Up

Thanks for reading and I hope that you have a clearer understanding of why building for Windows 8 at this stage is very important. I’d also suggest that you watch the recorded webinar to see exactly what Telerik has in store for Windows 8. You may also download RadControls for Metro here.

I’ve said this in all of my blog posts so far, but I am always open to any type of feedback that you may have. Telerik is driven by customer feedback. So, feel free to reply to this post or send me a tweet.

TeamPulse R6 2012 is here: Customer Feedback Portal updates


The last release for 2012 is here just in time for the holidays. It brings new features to our Customer Feedback Portal.

Private Feedback

With this release you can allow each user to submit feedback that is visible only to the author and your team. This feature is especially useful for cases where you work with several independent clients on the same code base.

You can choose between two options for capturing private feedback:

  • Allow users to choose whether they want to submit an item as private or public
  • Make all submitted items private by default

feedback portal private conversations

Status Filters

To allow faster navigation through all the feedback, users can now filter portal items by status and see only items that are, for example, Done, In Development, or New.

If you gather feedback from multiple users and don't want them to see each other's feedback, you can enable the private feedback feature. This way, only your team and the author of a private item will be able to see it.

status filters


Team Foundation Service Support

TeamPulse now supports bi-directional synchronization of work items with Team Foundation Service.

free TeamPulse trial





P.S. Telerik recently adopted the TeamPulse Ideas & Feedback Portal as the default feedback gathering tool for its major product lines. Check the new portal out >>  

Get ready for the goodness inside Telerik OpenAccess ORM Q1'13

We are releasing Q1 2013 within days, and many of you have already asked what's coming in Telerik OpenAccess ORM. That's why I've decided to shed some light on how the new version will help you achieve your goals even quicker than the previous ones.

First and foremost, as you are constantly dealing with bigger and bigger database schemas, we really wanted to make your life easier. Thus, we implemented the capability to split your model into many diagrams in the Visual Designer. With its help, you will be able to define groups of similar or connected Domain Classes and manage the model complexity much faster. To speed up the process we have developed the Include Related and Include Hierarchy commands, which automatically make classes associated with a particular class part of the same diagram. Furthermore, the Add New Domain Model wizard will offer you the option to create a diagram for each database schema, as this is one of the natural distributions of classes you might need. Don't worry about migrating your old models - the code remains completely unchanged regardless of the diagrams you decide to use!

A feature that you do not usually find in Object Relational Mappers is the support for data streaming. This is something you will be able to use "out of the box" with Q1 2013 of OpenAccess – both for reading and for writing large varbinary columns through streams. You can even update or append bytes to the current value by just setting a simple flag. To make it more flexible, we are still offering you the old approach of using byte arrays to map such columns, but now you can also switch your mapping to a new OpenAccess type – BinaryStream – and start streaming large data arrays right away!

Another unique OpenAccess feature that you will not frequently encounter elsewhere is the dynamic definition of types and properties at runtime. These Artificial Types are now easier than ever to use for your CRUD operations, through a new API available from the OpenAccessContext in Q1 2013!

For the web developers among you, a while ago we provided an ASP.NET Dynamic Data wizard, allowing you to create custom dynamic pages for each of your entities, with full CRUD capabilities. That wizard now uses the DynamicRadGrid - an implementation of the ASP.NET AJAX RadGrid that allows you to develop your application faster and in a more convenient manner using Dynamic Data.

Many of you are using different versions of WCF Data Services, so we have decided to introduce support for 5.2 - and in fact any other version - by providing you with the sources of our OData implementation. When you use the Add OpenAccess Service wizard to generate an OData service, it will import several code files instead of assemblies. While the code will still work "out of the box", if you would rather use an older version of WCF Data Services (or, for instance, the release candidate for 5.3), you can avoid any possible roadblocks when implementing your N-Tier scenarios by simply building the code against the release of your choice!

And finally, the OpenAccess SDK now comes with a new name - the OpenAccess Samples Kit. Not only has the name changed, we have also updated plenty of the examples and added several new ones!

Stay tuned as we will continue unveiling more information on the new goodness inside the Q1 2013 that's coming soon!


Q1 2013 Webinar Follow Up – Enhance Your WinForms LOB Applications with PDF Viewer, PivotGrid, and Reporting


INTRODUCTION

Thank you to everyone that joined Michael and me for the webinar today.  We hope that it has motivated you to use some of these new enhancements in your applications today!

As promised in the webinar, the slides, source code and recorded video are now available for your enjoyment!  If you have any questions feel free to email or tweet to us! 

Remember, the WinForms Control Suite and Telerik Reporting are both available as part of the awesome value that is the DevCraft Bundle.  The DevCraft Bundle has 12 products that can help you become more productive in your development work!  For the price of two products, you can have the entire collection - you need to check it out!

WEBINAR MATERIALS

Slides, Source Code and the Recorded Webinar Video are now available for you to enjoy!

WHAT DID WE COVER TODAY?

RadChartView has been enhanced in Q1 2013 with features such as:

  • drill down support
  • multiple axis support
  • smart label support
RadChartView

RadPivotGrid has now been officially released!  It has also been further enhanced with full OLAP support with KPIs (Key Performance Indicators) as well as Sorting and Filtering of Cube data.

RadPivotGrid

RadPdfViewer joins the cast of WinForms controls, allowing you to load and interact with PDF documents from your hard drive, a network share, a URL or an in-memory stream.  The use of RadPdfViewer in your applications does not rely on any additional third-party applications!  You have the ability to open documents, zoom, search for text, print, and navigate PDF documents when you associate the viewer with a RadPdfViewerNavigator control.

RadPdfViewer

The Visual Studio 2012 Dark Theme has been added, giving your applications the visual appeal of Visual Studio!

Visual Studio 2012 Dark Theme 

QUESTIONS AND ANSWERS

I’ve compiled a list of some of the unanswered questions from the Q&A.  Please feel free to reach out to me at any time if you have additional questions.

Q:

Is there support for a Stacked bar chart?

A:

We do support a stacked bar chart - you can find an example in the "First Look" ChartView example in the Demo Application that is installed with the WinForms tools.

Stacked Bar Chart
You can also see information on Stacked Bar Charts here: http://www.telerik.com/help/winforms/chartview-series-types-bar.html

Q:

Is there printing support in the RadPivotGrid?

A:

Yes! Printing support is available in the RadPivotGrid. You can see an example in the WinForms controls Demo application as well as here: http://www.telerik.com/help/winforms/pivotgrid-printing-support.html

Q:

Is it possible to load a PDF from BLOB data from SQL Server into the RadPdfViewer?

A:

Yes! Follow the example code provided in today’s Webinar and load the BLOB into the RadPdfViewer as a Stream!
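For reference, a minimal sketch of the idea looks roughly like the code below (this is not the webinar's exact code). It assumes a Documents table with a varbinary Content column, a connection string and document ID that you supply, and that RadPdfViewer exposes a LoadDocument overload accepting a Stream - check the current API reference for the exact signature.

using System.Data.SqlClient;
using System.IO;

private void LoadPdfFromDatabase(string connectionString, int documentId)
{
    byte[] blob;

    // Read the varbinary(max) column into a byte array (table and column names are illustrative)
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT Content FROM Documents WHERE Id = @id", connection))
    {
        command.Parameters.AddWithValue("@id", documentId);
        connection.Open();
        blob = (byte[])command.ExecuteScalar();
    }

    // Wrap the BLOB in a stream and hand it to the viewer
    var stream = new MemoryStream(blob);
    this.radPdfViewer1.LoadDocument(stream);
}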

Q:

Does the PDF Viewer have a PDF Signing Interface?

A:

No, not at this time; the PDF Viewer is strictly a viewer.  I will let the WinForms team know that this functionality has been requested.

 

WRAP-UP

We hope you enjoyed seeing the enhancements and additions to Telerik’s RadControls for WinForms for Q1 2013!  Please reach out to us and tell us what you would like to see in the future.  Whether it be new controls or features, we are here to help get your message heard, and we have awesome product teams that like to make sure that our products Deliver More Than Expected!

Download RadControls for WinForms by Telerik


Announcing RadControls for Windows Phone Q1 2013 SP1: Improved Chart, Calendar Enhancements and Much More


It’s been a while since we had our official Q1 2013 release, and what better way to show you what we've been working on than with a Service Pack? Q1 2013 SP1 comes as a milestone in the development of our Chart control as it introduces a ton of new functionality (multiple axes, empty values, annotations, etc.). In addition, Calendar now supports different date formats and can display the date in different calendars such as Hebrew, Hijri and many others. And finally, we've improved the integration between the DataBoundListBox and the Expander even further.

With support for multiple axes you can now present series of a different scale on the same graph.



You can now use annotations to highlight certain areas on the plot area and denote statistical significance.



With support for empty values, you can now bind the Chart to objects or collections of objects whose properties may be null.



RadChart now supports RangeBar series as well.



With support for various date formats, you can now display the date in different calendars such as Hebrew, Hijri and many others.



As usual, many fixes of issues reported by you are also included in Q1 2013 SP1.

Go ahead and download the new bits and let us know what you think!

Q3 2015 Official Release for WPF and Silverlight controls


We are happy to announce the arrival of the latest Telerik WPF and Silverlight components in the last major release for this year—Q3 2015. A detailed list of all new features can be found in our What’s New sections for WPF and Silverlight. All other details are included in our thorough release notes here for WPF and here for Silverlight.

New Controls

DesktopAlert (Official)

RadDesktopAlert for WPF will display a small pop-up window on the desktop to notify the user that a specific event in the application has occurred.

TimeSpanPicker (Beta)

The new RadTimeSpanPicker for WPF enables your end users to easily pick timespan values in desktop applications. The control provides full control over time and duration in any desktop app.

 TimeSpanPicker_screenshot2

New Features

DataServiceDataSource

Added support for a NuGet package that targets OData v4. The Telerik.Windows.Controls.DataServices60 assembly is built against the corresponding OData v4 binary, Microsoft.Data.Services.Client, version 6.13.0. All a customer needs to do is update the references to the same latest version of the OData assemblies.

ImageEditor

Introduced support for GIF and TIFF import/export and ICO import.

PdfProcessing

Introduced API for exporting PDF documents to plain text.

RibbonView for WPF

RadRibbonView now supports keyboard navigation where the active element is highlighted once the KeyTips are activated. This feature allows users to navigate through all items inside the RadRibbonView simply by using the arrow keys (Left/Right/Up/Down). Users can also execute the action associated with the currently focused element by using the Space/Enter keys.

 Ribbon_screenshot

TreeView

Implemented a Multiple Selection mode which is consistent with the same mode in RadListBox, RadGridView, etc. Multiple Selection refers to selecting/deselecting items with a single mouse click or a single Space key press. Until Q3 2015, Extended and Multiple Selection had the same behavior.

WordsProcessing for WPF

RTF Format Provider: Implemented omitting of color definitions in the color table, if a color is set to "auto" or "transparent."

HTML Format Provider: Introduced support for downloading image data only on demand when importing an image with URI source.

You can download the latest bits for WPF and Silverlight and see everything for yourselves. You can enjoy our WPF and Silverlight demos and share your thoughts through our Forums or our Ideas & Feedback portal.

10 Time-Saving CSS Tips I Learned the Hard Way When Using Sass


These top 10 CSS best practices will help you save time and collaborate better with your team.

We sometimes think that we know all we need to know about SCSS and that we can spend that extra time getting ahead on JavaScript.

I’m sorry to be the one breaking this to you, but you should pay more attention to your stylesheets. I’ve worked on projects where the code turned into spaghetti just because a few simple best practices weren’t applied. I quickly learned how precious some good tips can be when working with other people on a code base that can become quite large in no time.

That’s why, today, I’m sharing with you 10 SCSS best practices that will help you and your team.

Start using them; your teammates and the people who’ll later take over your code will thank you. (By the way… that’s one of the few ways you get extra points in the Good Place.)

Tip #1: Adopt the BEM Convention

Have you ever gotten into a project and didn’t know how to start reading or making sense of the CSS class naming?

Yeah, we’ve all been there. That’s why whenever I start a new project or join one, one of my first code style optimizations is implementing BEM and making sure that everyone follows it.

BEM stands for Block, Element, Modifiers. The added value that this CSS class naming convention brings to the table is simple: it allows you to visualize how your template is styled in a structured way.

How it works is even simpler:

  1. You name the main blocks of your page like this for instance: class="button".

  2. Then, name the elements inside each block using two underscores (to show that it’s part of the block): class="button__icon".

  3. And in case you have a variant of that block, use two dashes to name a modifier: class="button button--red".

So in your template, it will look like this:

<button class="button button--blue">
  <img class="button__icon" src="http://www.bem-br.org/img/logo.png" alt="icon-blue"/>
  <p class="button__label">Use BEM</p>
</button>

<button class="button button--red">
  <img class="button__icon" src="http://www.bem-br.org/img/logo.png" alt="icon-red"/>
  <p class="button__label">Please use BEM</p>
</button>

Editing your styles will become much easier, because you’ll be able to visualize the structure of your code:

.button {
  border: none;
  margin: 20px;
  cursor: pointer;

  .button__icon {
    width: 20px;
    height: 20px;
  }

  .button__label {
    color: white;
    padding: 20px 40px;
    text-align: center;
    text-transform: uppercase;
    font-size: 16px;
  }

  // --> MODIFIERS: COLORS <--
  &--blue {
    background-color: blue;
  }

  &--red {
    background-color: red;
  }
}

To learn more about BEM: MindBEMding – getting your head ’round BEM syntax.

Tip #2: Don’t Repeat Yourself, Use Variable Extrapolation for Your Block Class Names

If you follow the BEM convention (or are going to), here is another good practice you can follow to speed up your development time: using variable extrapolation. This way, you will not repeat yourself.

It’s pretty simple — you just define your block class in a variable (in the example above it was .button) and replace it using #{$c} in your CSS code.

Let’s use the example above, to see how it works:

$c: ".button";

#{$c} {
  border: none;
  margin: 20px;
  cursor: pointer;

  &--blue {
    background-color: blue;
  }

  &--red {
    background-color: red;
  }

  #{$c}__icon {
    width: 20px;
    height: 20px;
  }

  #{$c}__label {
    color: white;
    padding: 20px 40px;
    text-align: center;
    text-transform: uppercase;
    font-size: 16px;
  }
}

It’s a small and simple improvement, but not having to rewrite your block class name every time (or just being able to update it in a single spot) speeds things up, improves code readability and makes the structure of your CSS code pop out.

Tip #3: Structure Your Project With InuitCSS

You can think of InuitCSS as a CSS framework, even though it does not provide you with UI components or anything like that.

Instead, InuitCSS helps you normalize, configure, homogenize and structure your stylesheets.

Sounds abstract? Okay, let’s see how it does that.

First, go ahead and install InuitCSS using npm install inuitcss --save. Now all you have to do is get to know the InuitCSS-specific CSS directory structure that it provides and follow it to structure your project’s assets:

  • /settings: This is where all your global variables, site-wide settings and configs go. For example, instead of declaring colors variables in every one of my stylesheets, I just put them and organize them in one single file under this folder.

  • /tools: The tools folder is where you define your project mixins and functions. Most of the time, I use it to store the Sass mixin I use for responsive media queries.

  • /generic: Here, you can specify low-specificity CSS rules, like normalize.css and reset rulesets.

  • /elements: When you need to style unclassed HTML elements like links, pages, images, tables, and so on, you can simply create a stylesheet in this folder for that.

  • /objects: The objects folder is where you put your objects, abstractions, and design patterns like your layouts.

  • /components: This is where the style of specific UI components goes. Honestly, I never use it, simply because I code my projects with Vue.js and it uses single file components.

  • /utilities: The utilities folder is for all the high-specificity, very explicit selectors like the animations you need to use in your project.

It’s pretty neat, I know!
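To make the structure concrete, here is a rough sketch of what the main entry stylesheet could look like with these folders; the partial names are illustrative, not something InuitCSS mandates:

// main.scss - imports ordered from generic/far-reaching to explicit/specific
@import "settings/settings.colors";
@import "tools/tools.media-queries";
@import "generic/generic.reset";
@import "elements/elements.links";
@import "objects/objects.layout";
@import "components/components.button";
@import "utilities/utilities.animations";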

Tip #4: Use Datasets to Group Your Colors

If you’re using Sass loops, I definitely recommend using datasets, especially if it involves defining colors.

Let’s see this one in action by taking the example of social buttons. As you can probably guess, social buttons (Facebook, Twitter, etc.) have different colors.

So instead of having to write this:

// VARIABLES
$c: ".c-social-button";

#{$c} {
  border: none;
  border-radius: 4px;
  color: $white;
  user-select: none;
  cursor: pointer;

  // --> NETWORKS <--
  &--facebook {
    background: #3b5998;
  }
  &--google {
    background: #db4437;
  }
  &--messenger {
    background: #0084ff;
  }
  &--twitter {
    background: #1da1f2;
  }
}

You can choose a more elegant way:

// VARIABLES
$c: ".c-social-button";
$networks: facebook, google, messenger, twitter;

// THE DATASET FOR SOCIAL COLORS
$socialColors: (
  facebook: #3b5998,
  google: #db4437,
  messenger: #0084ff,
  twitter: #1da1f2
);

#{$c} {
  border: none;
  border-radius: 4px;
  color: $white;
  user-select: none;
  cursor: pointer;

  // --> NETWORKS <--
  @each $network in $networks {
    &--#{$network} {
      background: map-get($socialColors, $network);
    }
  }
}

Tip #5: Adopt Veli’s Colorpedia Naming Convention

If your color naming convention is light-pink, clap your hands. If your color naming convention is dark-blue, clap your hands. If your color naming convention is medium-grey, clap your hands.

Okay, you get it and you know it: using terms like light, dark, medium and so on as a naming convention for your project colors is very limiting, simply because there are some projects where you’ll have a lot of colors, and this is not going to take you very far.

Instead of scratching my head about this one every time, I simply use Veli’s colorpedia. This way you’ll get to give your colors names that a human can understand while not being limited by the light/medium/dark spectrum.
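For example, instead of stretching the light/medium/dark spectrum, you can name each variable after the closest real color name you find; the names below are purely illustrative:

// Instead of $grey-light, $grey-lighter, $grey-lightest...
$color-gainsboro: #dcdcdc;
$color-silver: #c0c0c0;
$color-charcoal: #36454f;

.card {
  background: $color-gainsboro;
  border: 1px solid $color-silver;
  color: $color-charcoal;
}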

Additional perks come with using Veli’s colorpedia: it provides you with matching colors and even tells you how a colorblind person sees each one.

Some designers are just angels sent from heaven.

Tip #6: Avoid Using Mixins Everywhere

When you don’t have to use mixins, just don’t do it! Why?

Because when you use mixins, they have to be well-structured and maintained in a rigorous way. Using mixins for no good reason is the best way to get lost when the project grows. They can cause side effects and become hard to update when they are used in many places. So use them carefully.

If you don’t know whether and when to use a mixin, remember this one rule: Mixins are here to avoid repeating yourself by keeping a Single Source of Truth.
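As an illustration, a mixin like the one below is justified because a parameterized rule set is defined, and maintained, in exactly one place (the mixin name is just an example):

// One source of truth for truncating text, reused across components
@mixin truncate($max-width: 100%) {
  max-width: $max-width;
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}

.card__title {
  @include truncate(250px);
}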

Also, as of today, we don’t have to use mixins to prefix CSS properties, because plugins like PostCSS Autoprefixer exist and do the heavy lifting for you.

Tip #7: Supercharge Your Media Queries with SASS MQ

Sass MQ is an open-source mixin crafted by developers working at The Guardian (fancy!). It’s amazing for two reasons:

  • It compiles keywords and px/em values to em-based queries, so when users zoom on your page, the content doesn’t get all scrambled up.

  • It provides fallbacks for older browsers like IE8.

It simply works by compiling this code:

$mq-breakpoints: (
  mobile:  320px,
  tablet:  740px,
  desktop: 980px,
  wide:    1300px
);

@import 'mq';

.foo {
  @include mq($from: mobile, $until: tablet) {
    background: red;
  }

  @include mq($from: tablet) {
    background: green;
  }
}

Into this:

@media (min-width: 20em) and (max-width: 46.24em) {
  .foo {
    background: red;
  }
}

@media (min-width: 46.25em) {
  .foo {
    background: green;
  }
}

Elegant, simple and useful. What’s not to like?

To start using it, just go ahead and follow the instructions on their GitHub page.

Tip #8: Use CSSComb

One final thing to help you keep your CSS code clean. I know that each of us has our own way of writing CSS, but sticking to it will leave you steps behind when working with somebody else or with a team on a project.

That’s why I use CSS Comb. I installed the extension in VS Code, and every time I start a new project I add a .csscomb.json file in its root.

This .csscomb.json file includes a configuration that transforms your CSS code and your teammate’s into one single format whenever you run the extension.

You can use my own CSS Comb configuration below, or configure your own just by choosing the way you want your CSS code to look.

Tip #9: Using Placeholders Can Often Be a Great Tool

In a project, I have a set of properties that define a dark background. I very often find myself having to type them over and over again. Here is how using a placeholder can come in very handy:

// The placeholder selector
%darkbg {
  border: 1px solid #000000;
  background: #101010;
  box-shadow: 0 1px 5px 0 rgba(#404040, 0.6);
}

.my-dark-block-for-errors {
  @extend %darkbg;
  // Some other properties for errors
}

.my-dark-block-for-success {
  @extend %darkbg;
  // Some other properties for success
}

This will compile into the following CSS code:

.my-dark-block-for-errors, .my-dark-block-for-success {
  border: 1px solid #000000;
  background: #101010;
  box-shadow: 0 1px 5px 0 rgba(64, 64, 64, 0.6);
}

.my-dark-block-for-errors {
  /* Some other properties for errors */
}

.my-dark-block-for-success {
  /* Some other properties for success */
}

Notice how it made our two blocks extend the placeholder? No need to repeat ourselves or remember these properties anymore.

Tip #10: Take a Few Minutes to Browse Awesome-Sass.

Awesome-Sass is a curated list of awesome Sass and SCSS frameworks, libraries, style guides and articles. It is a fantastic resource that is still being updated today. It includes so many interesting resources, and browsing it for even a few hours will deepen your Sass skills.

For instance, this is where I discovered the Sass Guidelines or Sassline.

I hope this article was useful. Sass will definitely save you time, and I also believe it made me a better developer. You can follow me on Twitter at @RifkiNada if you want to share more tips with me.


This post has been brought to you by Kendo UI

Want to learn more about creating great web apps? It all starts out with Kendo UI - the complete UI component library that allows you to quickly build high-quality, responsive apps. It includes everything you need, from grids and charts to dropdowns and gauges.


Build a Mini Vue Task Scheduler with the Kendo UI Scheduler Component


Learn how to build your own task scheduler in Vue.js using the Kendo UI Scheduler component. We create a Vue project and implement the Kendo UI scheduler to demonstrate the setup process and explain how to build the scheduler in Vue.js.

On average, we embark on two or three unplanned events daily. It could be in the office, at home, even at coffee shops. A friend could easily bump into you, and before you know it, you guys are heading to a place you didn’t know you’d go five minutes ago.

This is why task schedulers are important to keep us focused on what we must do, even in the face of increasing distraction. With a task scheduler, all you need to do is open your schedule and see what your next task is and what time you have scheduled to get it done.

They help us schedule specific tasks, and set them to be completed at specific times. This is a good way to check ourselves and organize our tasks in a rather simple manner to increase efficiency and improve productivity. In this post, we will demonstrate how you can build one for yourself using Vue.js and the Kendo UI Scheduler component.

Set Up a Vue Project

First, we have to create a Vue.js project with which we can demonstrate the implementation of our task scheduler. Without further ado, open a terminal window on your preferred directory and run the command below:

$ vue create scheduler-demo

If you don’t have Vue CLI installed globally, please follow this guide to do so and come back to continue with this lesson afterward.

When you’re done bootstrapping your Vue application, change into the new Vue application directory and start the development server.

$ cd scheduler-demo
$ npm run serve

This will serve your Vue application on localhost:8080. Navigate to it on your browser and you will see your Vue application live.

vue-app

Add Kendo UI to the Project

Next, let’s add Kendo UI to our new Vue project. For the scope of this demonstration, we’ll need:

  1. The Kendo UI package
  2. The Kendo UI default theme package
  3. The Kendo UI Scheduler wrapper for Vue

To do that, open a terminal window in the project’s root directory and run the commands below:

// Install the Kendo UI Vue package
$ npm install --save @progress/kendo-ui

// Install the Kendo UI Scheduler wrapper for Vue
$ npm install --save @progress/kendo-scheduler-vue-wrapper

// Install the Kendo UI default theme package
$ npm install --save @progress/kendo-theme-default
Finally, we add the necessary Kendo UI packages from the CDN service. Open the index.html file in the public directory and add this snippet within the <head> tag:
<!-- public/index.html -->

<!-- Load Kendo styles from the Kendo CDN service -->
<link rel="stylesheet" href="https://kendo.cdn.telerik.com/2017.3.913/styles/kendo.common.min.css"/>
<link rel="stylesheet" href="https://kendo.cdn.telerik.com/2017.3.913/styles/kendo.default.min.css"/>

<!-- Load the required libraries - jQuery, Kendo, Babel and Vue -->
<script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>
<script src="https://kendo.cdn.telerik.com/2017.3.913/js/kendo.all.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.6.15/browser-polyfill.min.js"></script>
<script src="https://unpkg.com/vue/dist/vue.min.js"></script>

<!-- Load the required Kendo Vue package(s) -->
<script src="https://unpkg.com/@progress/kendo-scheduler-vue-wrapper/dist/cdn/kendo-scheduler-vue-wrapper.js"></script>

Create the Scheduler Component

Now that we have all the Kendo UI packages we need for our scheduler app, let’s go ahead and modify our Vue app to render the scheduler. To do this, open the src/components/ folder. You should find an existing file HelloWorld.vue. Rename the file to Scheduler.vue and update it with the code below:

<!-- src/components/Scheduler.vue -->
<template>
  <div class="hello">
    <h1>{{ msg }}</h1>
    <div id="vueapp" class="vue-app">
      <div>
        <kendo-scheduler :data-source="localDataSource" :date="date" :height="600"
                         :timezone="'Etc/UTC'" @add="onAdd" @navigate="onNavigate">
          <kendo-scheduler-view :type="'day'"></kendo-scheduler-view>
          <kendo-scheduler-view :type="'workWeek'" :selected="true"></kendo-scheduler-view>
          <kendo-scheduler-view :type="'week'"></kendo-scheduler-view>
          <kendo-scheduler-view :type="'month'"></kendo-scheduler-view>
          <kendo-scheduler-view :type="'agenda'"></kendo-scheduler-view>
        </kendo-scheduler>
      </div>
    </div>
  </div>
</template>

<script>
export default {
  name: 'Scheduler',
  data: function () {
    return {
      date: new Date('2013/6/6'),
      localDataSource: [{
        id: 1,
        start: new Date("2019/2/18 08:00 AM"),
        end: new Date("2019/2/19 09:00 AM"),
        title: "Interview"
      }]
    };
  },
  methods: {
    onAdd: function (ev) {
      console.log("Event :: add");
    },
    onNavigate: function (ev) {
      console.log("Event :: navigate");
    }
  },
  props: {
    msg: String
  }
}
</script>

Here, we have rendered the <kendo-scheduler> widget on the application’s template section. The scheduler comes with a lot of events like onChange, onNavigate, onAdd, etc. There are a lot more scheduler events you should totally check out here.

We also rendered the <kendo-scheduler-view> widgets with their respective types to provide the option to render scheduled events in different views - as a single day, a whole week or month, or as a list of tasks which need to be accomplished.

Next, we predefined a task in the localDataSource array to render it on the scheduler when we run our app. We have also set up two events on our Vue methods object to define the events on the scheduler widget.

Modify App Component

Next, let’s import this component in the App.vue file and render it to the screen. Open the App.vue file and update it with the code below:

<!-- src/App.vue -->
<template>
  <div id="app">
    <img alt="Vue logo" src="./assets/logo.png">
    <Scheduler msg="Welcome to your task scheduler"/>
  </div>
</template>

<script>
import Scheduler from './components/Scheduler.vue'

export default {
  name: 'app',
  components: {
    Scheduler
  }
}
</script>

<style>
#app {
  font-family: 'Avenir', Helvetica, Arial, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  text-align: center;
  color: #2c3e50;
  margin-top: 60px;
}
</style>

Finally, we import the SchedulerInstaller in our main.js file. Then add it to our Vue instance to make it available everywhere in our app. Open the main.js file and update it with the code below:

// src/main.js
import Vue from 'vue'
import App from './App.vue'
import { SchedulerInstaller } from '@progress/kendo-scheduler-vue-wrapper'

Vue.use(SchedulerInstaller)

Vue.config.productionTip = false

new Vue({
  render: h => h(App),
}).$mount('#app')

At this point, if you save the changes and check back on the browser, you should see the scheduler rendered like so:

scheduler

 

Great, we have our task scheduler working exactly as expected! We can see how the predefined task has been rendered on our scheduler, and we can view its details on the Agenda tab.

Add a New Task

What if we wanted to add a new custom task to our scheduler – how do we go about it? Well, it’s straightforward. We open the Scheduler component and update our localDataSource array like so:

...
{
  id: 2,
  start: new Date("2019/2/22 1:00 PM"),
  end: new Date("2019/2/22 2:00 PM"),
  title: "Conference"
},

Here, we are creating another conference task on the 22nd of Feb, 2019. This conference will happen between 1 pm and 2 pm according to our schedule; however, it'll be rendered an hour early for us. If you save this change and reload the browser, you should see that our new task has been scheduled on our scheduler:

new-task-scheduled

Conclusion

In this post, we have demonstrated how to build your own task scheduler in Vue.js using Kendo UI Scheduler component. It is very simple and straightforward to implement. Feel free to learn more about this component on the official documentation page.


This post has been brought to you by Kendo UI

Want to learn more about creating great web apps? It all starts out with Kendo UI - the complete UI component library that allows you to quickly build high-quality, responsive apps. It includes everything you need, from grids and charts to dropdowns and gauges.


Build a Countries List with Telerik UI for WinForms DomainUpDown


In this blog post, you will learn more about the DomainUpDown control in Telerik UI for WinForms and how to use it to build a selection list for countries.

RadDomainUpDown in Telerik UI for WinForms is a combination of a text box and a pair of up and down buttons used to navigate through a limited selection of options. This control may save you some screen space, since it occupies only the space needed for a standard text box while still allowing the end user to select one of several items.

A common use-case is to build an input form for filling in personal information. One of the required fields is the nationality. RadDomainUpDown is suitable for presenting the country options if you don’t want to allocate a lot of space on the form.

DomainUpDown_Animated

Adding Countries to the DomainUpDown Control

You can add the country items either at design time or at run time.

Adding Items at Design Time

The RadListDataItem Collection Editor allows you to do that. You can access it through the Smart tag >> Edit Items option:

DUD_01

Adding Items at Run Time

For each country option, add a RadListDataItem to the Items collection that RadDomainUpDown offers:

RadListDataItem item1 = new RadListDataItem("Bulgaria");
RadListDataItem item2 = new RadListDataItem("France");
RadListDataItem item3 = new RadListDataItem("Italy");
this.radDomainUpDown1.Items.AddRange(new List<RadListDataItem>()
{
    item1,
    item2,
    item3
});

Adding Flags to the Countries

Open the project’s Resources and add the flags for the countries that you have added:

DUD_02

DUD_03

Adding Country Flags at Design Time

Open the RadListDataItem Collection Editor again and assign an image to each RadListDataItem:

DUD_04

Adding Country Flags at Run Time

Set the Image property for each RadListDataItem:

item1.Image = Properties.Resources.BUL;
item2.Image = Properties.Resources.FR;
item3.Image = Properties.Resources.ITA;

The last thing to do is to set the ReadOnly property to true. Thus, the item’s image will be shown next to the text after making a selection:

DUD_05
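In code, that is a single property assignment (assuming the control name used in the earlier snippets):

this.radDomainUpDown1.ReadOnly = true;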

Wrapping Items

Set the Wrap property to true if you need the selected item to revert to the first item after reaching the last item and vice versa.

DomainUpDownWrap 
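In code, that is again a single property setting (control name as in the earlier snippets):

this.radDomainUpDown1.Wrap = true;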

Data Validating

The SelectedIndexChanging event allows you to control whether the newly selected item is valid according to the other fields’ input, e.g. the selected town. If the selection is not valid, simply set the Cancel argument to true:

private void radDomainUpDown1_SelectedIndexChanging(object sender,
    Telerik.WinControls.UI.Data.PositionChangingCancelEventArgs e)
{
    if (e.Position > -1 && this.radDomainUpDown1.Items[e.Position].Text == "Italy")
    {
        e.Cancel = true;
    }
}

Try It Out and Share Your Feedback

You can learn more about the Telerik UI for WinForms suite via the product page. It comes with a 30-day free trial, giving you some time to explore the toolkit and consider using it for your current or upcoming WinForms development.

Start My Trial

We would love to hear what you think, so should you have any questions and/or comments, please share them in our Feedback Portal.

Using WebAssembly with React


WebAssembly is one of the newest technologies to hit the web dev world with some promising new features around performance. This is a look into how we could slowly integrate the new technology into an existing React app.

WebAssembly is one of the newest technologies in web development. It allows you to execute code built in other languages — a feature you can take advantage of without a major rewrite, since we can incorporate it with existing code bases. Since the easiest way to gain adoption of new technology is to slowly weave it into an existing code base, we are going to be taking a React app that is built with create-react-app and adding WebAssembly libraries that were built in Rust. It's pretty common to have more than one team working on a React app (frontend + backend), and I can't think of a cooler experience than sharing code without sharing a language.

The source code for this article can be found on GitHub: react-wasm-migration and react-wasm-rust-library.

Initial Working React App

I started with creating a React app using the boilerplate.

npx create-react-app react-wasm-migration

Out of the box, create-react-app will not support WebAssembly. We have to make some changes to the underlying webpack config that powers the app. Unfortunately, create-react-app doesn't expose the webpack config file. So, we'll need to pull in some dev dependencies to help out: react-app-rewired is going to allow us to modify the webpack config without ejecting, and wasm-loader will help webpack handle WebAssembly.

Yarn:

yarn add react-app-rewired wasm-loader --dev

npm:

npm install react-app-rewired wasm-loader -D

Once this is done, you should have a fully functioning app, and we can jump into making some tweaks to our webpack.

Modify Webpack to Support WebAssembly

We need to add a config-overrides.js file to the root of our app. This file will allow us to make changes to our webpack file without rewriting it.

const path = require('path');

module.exports = function override(config, env) {
  const wasmExtensionRegExp = /\.wasm$/;

  config.resolve.extensions.push('.wasm');

  config.module.rules.forEach(rule => {
    (rule.oneOf || []).forEach(oneOf => {
      if (oneOf.loader && oneOf.loader.indexOf('file-loader') >= 0) {
        // make file-loader ignore WASM files
        oneOf.exclude.push(wasmExtensionRegExp);
      }
    });
  });

  // add a dedicated loader for WASM
  config.module.rules.push({
    test: wasmExtensionRegExp,
    include: path.resolve(__dirname, 'src'),
    use: [{ loader: require.resolve('wasm-loader'), options: {} }]
  });

  return config;
};

Credit for the above file goes to the folks over in Wasm Loader GitHub Issues, who were working towards the same goal of getting WebAssembly into a create-react-app.

At this point, if you run yarn start, you will not be using the webpack config changes, since we need to modify the package scripts. We need to make some changes to package.json in order to take advantage of the changes we just made.

Old:

"scripts": {
  "start": "react-scripts start",
  "build": "react-scripts build",
  "test": "react-scripts test"
}

New:

"scripts": {
  "start": "react-app-rewired start",
  "build": "react-app-rewired build",
  "test": "react-app-rewired test"
}

If you run yarn start, you should see the same initial page for a create-react-app. After each step, you should have a working application.

Including WebAssembly

There are several guides on creating WebAssembly in your language of choice, so we are going to gloss over such creation in this post. I've attached a link to the repo that I used to create the .wasm file that we are going to be using for this application. You can check it out along with some details on how I created it at react-wasm-rust-library.

At this point, our React app can support WebAssembly — we just need to include it within the app. I've copied my WebAssembly package into a new folder called “external” at the root level.

For the WebAssembly, we have added hello, add, and sub functions. Hello takes a string and returns Hello, <string>. Add will take two numbers and return their sum. Sub will take two numbers and return their difference.
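For context, here is a rough sketch of what the Rust side of such a library might look like when built with wasm-bindgen; this is an illustrative sketch, not the exact code from the react-wasm-rust-library repo:

use wasm_bindgen::prelude::*;

// Exposed to JavaScript as wasm.hello(name)
#[wasm_bindgen]
pub fn hello(name: &str) -> String {
    format!("Hello, {}", name)
}

// Exposed to JavaScript as wasm.add(a, b)
#[wasm_bindgen]
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

// Exposed to JavaScript as wasm.sub(a, b)
#[wasm_bindgen]
pub fn sub(a: i32, b: i32) -> i32 {
    a - b
}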

Next up, we need to add our Wasm to our package.json and install it using yarn install --force or npm install.

"dependencies": {
  "external": "file:./external"
}

This is not standard — we are actually skipping the step where we publish the WebAssembly package to npm and install it like any other node dependency. For production, you would want to publish your WebAssembly package to a private or public npm and install it using Yarn or npm.

Connecting All the Dots

We have everything in place to support WebAssembly; Webpack has been modified to support WebAssembly and we have included our WebAssembly package into our app. The last step is to start using the code.

WebAssembly must be loaded asynchronously, so we must include it using a dynamic import in App.js:

componentDidMount() {
  this.loadWasm();
}

loadWasm = async () => {
  try {
    const wasm = await import('external');
    this.setState({wasm});
  } catch(err) {
    console.error(`Unexpected error in loadWasm. [Message: ${err.message}]`);
  }
};

This will give us access to the WebAssembly as this.state.wasm. Next, we need to utilize our library.

render() {
  const { wasm = {} } = this.state;
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <p>Edit <code>src/App.js</code> and save to reload.</p>
        <a className="App-link" href="https://reactjs.org" target="_blank" rel="noopener noreferrer">Learn React</a>
      <div>
        <div>Name: <input type='text' onChange={(e) => this.setState({name: e.target.value})} /></div>
        <div>{ wasm.hello && wasm.hello(this.state.name) } </div>
      </div>
      <div>
        <div>
          Add:
          <input type='text' onChange={(e) => this.setState({addNum1: e.target.value})} />
          <input type='text' onChange={(e) => this.setState({addNum2: e.target.value})} />
        </div>
        <div>
          Result:
          { wasm.add && wasm.add(this.state.addNum1 || 0, this.state.addNum2 || 0) }
        </div>
      </div>
      <div>
        <div>
          Sub:
          <input type='text' onChange={(e) => this.setState({subNum1: e.target.value})} />
          <input type='text' onChange={(e) => this.setState({subNum2: e.target.value})} />
        </div>
        <div>
          Result:
          { wasm.sub && wasm.sub(this.state.subNum1 || 0, this.state.subNum2 || 0) }
        </div>
      </div>
    </header>
  </div>
  );
}

At this point, you can yarn start and start interacting with your WebAssembly.

Image of Working App with WebAssembly

Things to Watch for Along the Way

You can see how this can be pretty game-changing in places where you have teams working in different languages but need to collaborate on common deliverables, since you can share code instead of contracts. There are definitely some anti-patterns to watch out for as you begin your WebAssembly journey, though.

You will want to keep your libraries small since they cannot be bundled with the rest of your code. If you find that you are creating a massive WebAssembly, it may be time to break it up into smaller pieces.

You shouldn't WebAssembly-ify everything. If you know that the code is frontend only and there is no reason to share it, it may be easier to write it in JavaScript and maintain until you can verify that WebAssembly will make it perform faster.

Hopefully you feel that adding WebAssembly into your React project is well within reach after reading over this article.

The Journey of JavaScript: from Downloading Scripts to Execution - Part II


In this article, you’ll learn how JavaScript engines have evolved from a mere interpreter to a performant and efficient engine that produces highly optimized machine code. You’ll also learn about the underlying components in the V8 JavaScript engine, including how the interpreter produces bytecode with abstract syntax tree as the input and how the compiler uses this bytecode to generate optimized machine code. This article will also help in understanding some of the performance optimization techniques for objects and arrays.

This article is a part of the series on The Journey of JavaScript - from Downloading Scripts to Execution

Highlights from Part I of the Series

  1. We learned about the different ways of downloading scripts based on the use case. Scripts are downloaded synchronously and are blocking in nature. When the main thread comes across a script tag, it blocks the parsing of HTML DOM until the entire script is downloaded, parsed and executed. However, scripts can also be downloaded asynchronously, without blocking the main thread, with the use of async keyword in the script tag. If any of these scripts are not required on load, we can defer their execution until the DOM is ready by using defer keyword.
  2. JavaScript engines are built of parser, interpreter and compiler. The JavaScript source code is broken into tokens, which are fed to the parser. The parser generates an abstract syntax tree (AST) and Scopes based on these tokens.
  3. The JavaScript engines do not parse all of the source code on load. We saw the heuristics employed by the V8 engine for parsing JavaScript code.

You can read more on the above points here.

Overview of JavaScript Engines

The semantics of JavaScript is defined by the ECMAScript specifications, and these specifications are written by the TC39 committee. The JavaScript engines are required to follow these specifications while implementing different functionalities in JavaScript. Most of the major browsers have their own implementation of these engines, but they have the same end goal of meeting the semantics laid down by the TC39 committee. So, most of the performance-related techniques are applicable in almost all browsers.

Let’s look at the list of JavaScript engines in some of the major browsers.

Browser | JavaScript Engine
Microsoft Edge | Chakra
Firefox | SpiderMonkey
Safari | JavaScriptCore
Google Chrome | V8

In this article, we will dig deeper into the internals of the V8 JavaScript engine.

High-Level Architecture of V8

alt High-level Architecture of V8

As we can see from the above image, the JavaScript source code is fed to the parser. The parser generates an AST. Ignition generates bytecode, and TurboFan produces optimized machine code. Don’t worry about the red and green arrows as of now. They will make sense once we get to the working of Ignition and TurboFan.

How Are JavaScript Engines Different from the Engines of Other Programming Languages?

The high-level languages like C++ and Java take two steps to convert source code to machine code. The compiler first converts the source code to an intermediate code and then the interpreter takes this intermediate code and converts it to machine code. For example, a Java file is compiled as javac filename.java. This command generates a bytecode and stores it in filename.class file. This bytecode can run on any machine that has Java Virtual Machine/Java Interpreter. It can be executed using the command java filename.class. The initial compilation step does the heavy work, and, hence, the server can execute the Java bytecode at a faster speed.

Unfortunately, the above strategy is not implemented in JavaScript engines. The JavaScript engines do not follow a two-step procedure for executing the source code. We never do compile filename.js. We directly run the JavaScript file in the browsers or on run-time with Node.js. The engines do not compile the entire source code all at once. Instead, JavaScript engines interpret the source code line by line and execute that line simultaneously.

Don’t you think this would make the execution of JavaScript a lot slower as compared to other high-level languages? JavaScript runs source code, whereas other high-level languages that we saw earlier run the optimized bytecode, which was generated in the previous step of compilation.

Let’s Look at Some Statistics

Here are the performance results with hardware usage benchmarks test. It is taken from this IBM article.

alt Comparison of nodejs with Java

Node.js uses the V8 JavaScript engine as the run-time. In the above image, we can see that Node.js performs better than Java with respect to the CPU usage, while its performance is on par with Java with respect to the usage in memory.

Here is the snapshot of most popular languages from the Stack Overflow survey of 2018.

alt Stack Overflow Survey 2018

As per the above image, JavaScript is the most popular language among professional developers.

Even though JavaScript engines skip the compilation step, its performance and popularity are better than other languages. Let’s see how JavaScript engines make this possible.

Evolution of JavaScript Engines

The first JavaScript engine was a mere interpreter. An interpreter is a software program that executes the source code line by line.

Let’s consider a JavaScript snippet below to understand how former engines used to operate.

function arrSum(arr) {
    var sum = 0
    for (var i = 0; i < arr.length; i++) {
        sum += arr[i]
    }
}

It has a simple arrSum function that adds the elements of an array arr. Inside for loop, there is only one statement that does the work of adding array elements to the variable sum.

Consider this from the point of the interpreter —

It is possible in JavaScript that an array can have numbers as well as strings at the same time. arr might have a mix of different data-types. On every iteration of the for loop, the interpreter checks the type of element in the array and accordingly performs the add/concatenation operation. + behaves as an addition operator for numbers and as concatenation operator for strings.
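
Here is a small illustration (not from the original article) of why the interpreter must keep checking: the + operator changes behavior depending on the operand types, and so does the running sum inside arrSum.

1 + 2                 // 3 (numeric addition)
1 + '2'               // "12" (string concatenation)

arrSum([1, 2, 3])     // sum ends up as 6
arrSum([1, '2', 3])   // sum ends up as "123" once the string element is reached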

This type checking and computation on every iteration makes it slow. In earlier days, JavaScript was not considered as a language of choice because it was very slow compared to other high-level languages.

However, as we have seen from the statistics above, JavaScript performs better now and is a much-loved language among professional developers.

Chrome developed the first modern JavaScript engine, V8, in 2008. Its performance was much better than any prior engine. Chrome used just-in-time compilation in the V8 engine to boost its performance. Most of the browsers/JavaScript run-time systems now use the same technique to power up the execution speed of JavaScript code.

What is Just-in-Time Compilation (JIT)?

If the above code of adding elements of an array was compiled first, it would have taken some time to start up, but its execution would have been faster as compared to the earlier technique. JIT takes in the good parts of both the compiler and the interpreter. It interprets the source code line by line, produces bytecode for that line, and gives it to the compiler, which uses the profiling information to produce speculative optimizations. It compiles code during execution at run time.

Browsers started shipping JavaScript engines with a JIT and a profiler. A profiler, or a monitor, watches the code that runs and makes a note of the number of times a particular code snippet runs. If the same lines of code run more than some threshold number of times, that code is marked as hot. The profiler then sends this hot code to the optimizing compiler.

The compilation details of the hot code are saved. If the profiler encounters this hot code again, it would convert it into its existing optimized version. This helps in improving the performance of the execution of JavaScript code. We’ll get to the assumptions that the compiler makes to generate optimized code later in the article.

How JIT Works in V8

alt Interpreter and Compiler in V8 JavaScript Engine

Ignition (the interpreter) takes in the AST, goes through its nodes one by one, and produces the corresponding bytecodes. The profiler keeps an eye on the frequency at which a particular code snippet runs. If the frequency crosses some threshold value, it sends the hot code along with some profiling information to TurboFan (the compiler). TurboFan makes some assumptions to optimize this code further, and, if these assumptions hold true, it generates an optimized version of the hot code. The green arrow signifies optimization success!

If the assumptions are not correct, it falls back to the bytecode that was generated by the Ignition. This is called a deoptimization or optimization bailout. The red arrow signifies de-optimization!

Let’s break JIT into parts and understand the working of each of the components in detail.

How Ignition Generates Bytecode

Bytecodes are considered small building blocks that can be composed together to build a JavaScript functionality. They abstract the low-level details of machine code. V8 has hundreds of bytecodes for different functionalities. For example, it uses the Add bytecode for addition operator and CreateObjectLiteral bytecode for creating an object.

Ignition uses a register machine to hold the local state of the registers. It has a special register called an accumulator that stores the previously computed value.

Let’s consider a simple JavaScript snippet.

function add(x, y) {
    return x + y;
}
add(1, 2);

Here’s the bytecode generated for the above code:

alt ByteCode of the add function generated by Ignition

Focus only on the right-hand part. The registers a0 and a1 hold the value of the formal parameters. Add is a bytecode that is used for adding the values in registers a0 and a1.

Before jumping onto the working of TurboFan, we have to understand two important concepts in V8:

  1. Implementation of the object model
  2. Implementation of arrays

Implementation of the Object Model

The ECMAScript specification defines objects like dictionaries with string keys that map to values. In this section, we’ll learn how JavaScript engines store objects and how they implement property access on these objects.

let pokemonObj = {
    id: 12,
    name: 'Butterfree',
    height: 11,
    weight: 22
}

Below is the AST for the object pokemonObj.

{"type":"Program","start":0,"end":87,"body":[{"type":"VariableDeclaration","start":0,"end":87,"declarations":[{"type":"VariableDeclarator","start":4,"end":87,"id":{"type":"Identifier","start":4,"end":14,"name":"pokemonObj"},"init":{"type":"ObjectExpression","start":17,"end":87,"properties":[{"type":"Property","start":23,"end":29,"method":false,"shorthand":false,"computed":false,"key":{"type":"Identifier","start":23,"end":25,"name":"id"},"value":{"type":"Literal","start":27,"end":29,"value":12,"raw":"12"},"kind":"init"},{"type":"Property","start":35,"end":53,"method":false,"shorthand":false,"computed":false,"key":{"type":"Identifier","start":35,"end":39,"name":"name"},"value":{"type":"Literal","start":41,"end":53,"value":"Butterfree","raw":"'Butterfree'"},"kind":"init"},{"type":"Property","start":59,"end":69,"method":false,"shorthand":false,"computed":false,"key":{"type":"Identifier","start":59,"end":65,"name":"height"},"value":{"type":"Literal","start":67,"end":69,"value":11,"raw":"11"},"kind":"init"},{"type":"Property","start":75,"end":85,"method":false,"shorthand":false,"computed":false,"key":{"type":"Identifier","start":75,"end":81,"name":"weight"},"value":{"type":"Literal","start":83,"end":85,"value":22,"raw":"22"},"kind":"init"}]}}],"kind":"let"}],"sourceType":"module"}

Notice the use of properties array that holds the key-value pairs of the object. As per the ECMAScript specification, the keys of an object are to be mapped to their respective property attributes. The property attributes tell more about the configuration of that key in the object.

Here is the list of the property attributes:

Attribute | Description | Default Value
value | The value associated with the property. Can be any valid JavaScript value (number, object, function, etc.). | undefined
enumerable | true if and only if this property shows up during enumeration of the properties on the corresponding object. | false
writable | true if and only if the value associated with the property may be changed with an assignment operator. | false
configurable | true if and only if the type of this property descriptor may be changed and if the property may be deleted from the corresponding object. | false

Let’s say a program has a hundred instances of the pokemonObj object, which is likely the case in common scenarios. That would mean the engine is supposed to create a hundred objects with all four keys of pokemonObj (id, name, height and weight) along with their property attributes. That’s a waste of memory, storing repeated instances of the metadata of an object. The JavaScript engines tackle this problem by storing a shared copy of the metadata of an object. So, for hundreds of pokemonObj instances, it creates just one object to store the common metadata.

alt Multiple Objects sharing same metadata

We can optimize this more. Each of these objects holds the same four keys — id, name, height and weight. They all have the same shape of pokemonObj. We can create one shape with these four keys and let all the instances point to that shape. This will become clear from the diagram below.

alt JavaScript Objects share same shape

As it can be seen from the image above, all of the different Pokemon refer to the same shape (id, name, height, weight), and each of the keys in this shape structure refers to its respective property attribute’s structure. That’s a significant saving on the memory side!
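
As a quick illustration (the object literals below are assumed for the example, not taken from the article), two objects with the same keys in the same order can be backed by a single shared shape:

const bulbasaur = { id: 1, name: 'Bulbasaur', height: 7, weight: 69 };
const ivysaur   = { id: 2, name: 'Ivysaur', height: 10, weight: 130 };
// Same keys in the same order: the engine can reuse one shape for both
// objects and store only the values per instance.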

There is an additional property offset that is included with the list of property attributes. Please note id has an offset of 0; name has an offset of 1; and so on. If we follow the pattern, it can be inferred that the keys in pokemonObj are assigned a sequential index as per their position in the properties array defined on the AST.

Let’s understand the need for the offset property.

For accessing any property on the pokemonObj, we use the . (dot) operator as pokemonObj.property. Let’s say we want the values of name, height and abilities of a Pokemon.

let name = pokemonObj.name           // Butterfree
let height = pokemonObj.height       // 11
let abilities = pokemonObj.abilities // undefined

When the JavaScript engine encounters something like pokemonObj.name, it searches for the name property defined on the shape of pokemonObj. If it is found, it returns the value. Since abilities is not defined on the shape of pokemonObj, the engine searches for the abilities property on the prototype object of pokemonObj. It keeps traversing down the chain until it either finds the abilities key or reaches the end of the chain. In our case, it returned undefined, which means abilities was not defined on any of the ancestors of pokemonObj. Property accesses are hard in JavaScript. It is an expensive operation for an engine to traverse through the entire prototype chain for a single property.

Property access for keys name and height is very simple because of the offset property. The JavaScript engine simply returns the value from the object using the offset as defined on a particular property. For name, it would return the first value from pokemonObj, and for height it will return its second value.

Let’s say, we’ve got one more Pokemon called weedle joining in our gang. This one has some unique abilities and we really want to store all of its abilities in our pokemonObj object.

let weedle = {
    id: 13,
    name: 'weedle',
    height: 3,
    weight: 32,
    abilities: 'run-away'
}

We already have defined the shape for pokemonObj. JavaScript allows adding/deleting properties from an object at run-time. How do we handle the addition of abilities property to our pokemonObj shape? Should we make a new shape altogether for this object, or should we extend the previous shape? You might have guessed it right. We’ll extend our original pokemonObj shape to make room for an additional key called abilities.

alt JavaScript Objects extend Shapes

Extending shapes makes it easy to store objects that differ in some properties from their counterparts.

This can also be viewed as a hierarchical chain that starts from an empty object and eventually builds up the entire object.

let obj = {}
obj.a = 'a'
obj.b = 'b'
obj.c = 'c'

It starts with an empty shape. The obj.a = 'a' statement extends the empty shape and adds one property a to it. This goes on until we reach c. Please note, this is not an efficient way of creating objects. In order to create the above object, follow the method of object declaration below.

let obj = {
    a: 'a',
    b: 'b',
    c: 'c'
}

Takeaways from the Object Model Implementation

  1. Avoid adding/deleting properties from an object. Strive to maintain a constant structure for objects.
  2. Avoid reordering the keys of an object. The JavaScript engine takes into account the ordering of keys of an object using the offset property.
  3. Property accesses are expensive in JavaScript. The engine goes down to the root to find a particular property. Instead of directly accessing a property as pokemonObj.abilities, use hasOwnProperty method as below.
if (pokemonObj.hasOwnProperty('abilities')) {
    // ... YOUR CODE
}

hasOwnProperty method searches only on the object and not on its prototype chain.

Implementation of Arrays

Similar to objects, arrays are also considered dictionaries. But in the case of arrays, the keys are numeric and are called array indices. The JavaScript engines perform special optimizations for properties whose names are purely numeric. Objects have properties that map to values, while arrays have indices that map to elements.

In JavaScript, we don’t define the type of elements that an array will contain beforehand. While running JavaScript code, the engine determines the type of elements of an array and accordingly performs optimizations based on its type.

let arrIntegers = [1, 2, 3, 4]
let arrDoubles = [1, 2.2, 3.3]
let arrMix = [1, 2.2, 'Hello World!']

The arrIntegers holds elements of type integers; arrDoubles contains integers as well as doubles; while arrMix contains integers, doubles, as well as strings.

The engines identify these three types of arrays as:
arrIntegers - SMI_ELEMENTS
arrDoubles - DOUBLE_ELEMENTS
arrMix - ELEMENTS

SMI_ELEMENTS is a more specific type that contains only small integers. DOUBLE_ELEMENTS contains floating-point numbers as well as integers, while ELEMENTS is a general form that holds everything.

There are two more variants to these types defined above.

If an array is dense, meaning it does not contain any undefined values, it is classified as a PACKED variant. However, if an array contains gaps, holes, or undefined values, it is classified as a HOLEY array.

let packedArr = [1, 2, 3, 4]
let holeyArr = new Array(4)
holeyArr[0] = 1

packedArr has definite values on all its indices. In the second statement, the Array constructor creates an empty array of length 4. It has four undefined values. When we do holeyArr[0] = 1, it still contains three undefined values. Even if we populate all of the indices of holeyArr with definite values, it will still be called as HOLEY. Arrays cannot transition from a more generic type to its specific variant at any point in time. PACKED is a more specific variant as compared to HOLEY.

The JavaScript engine will categorize packedArr as PACKED_SMI_ELEMENTS because it contains all integers and it does not contain any undefined values. V8 can perform better optimizations for elements that are of more specific kinds. Operations on PACKED arrays are more efficient than those on HOLEY arrays.

V8 currently distinguishes 21 different elements kinds, each of which comes with its own set of possible optimizations.

alt Elements Kinds

It is only possible to transition down the lattice. An array cannot go back to its more specific version at any point.
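
Here is a minimal sketch of how an array walks down that lattice (the kind names follow the V8 terminology used above):

const arr = [1, 2, 3];   // PACKED_SMI_ELEMENTS
arr.push(4.5);           // transitions to PACKED_DOUBLE_ELEMENTS
arr.push('x');           // transitions to PACKED_ELEMENTS
// Removing the string later does not move the array back to a more specific kind.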

Avoid Creating Holes in an Array

If you initialize an array using an array constructor as below —

let arr = new Array(3)

— it creates holes in the array. The engine identifies this array as being of type HOLEY_ELEMENTS. Property accesses on HOLEY_ELEMENTS are expensive. If you want the value at arr[0], the engine first searches for the 0 property on the array. The array arr does not contain any of these properties yet. Since the engine cannot find the key 0 on arr, it descends to its prototype chain until it finds the key 0 or reaches the end of the chain. This makes it similar to objects, and so even though we're using arrays, we're not able to optimize them.

Instead, create arrays using an empty literal and then keep pushing values to it.

let arr = []
arr.push(1)
arr.push(2)

With this approach, the number of elements present in the array is the same as that of its length.

For the same reason, access only those properties on the array that are defined within its length. If you access a property that is not defined on an array, it will enter into expensive lookups on the prototype chain.

V8 performs several optimizations for array built-in methods like map, reduce and filter.

Speculative Optimizations by the Compiler

If a certain code snippet has been executed some x number of times, the profiler passes it to the optimizing compiler. TurboFan uses the profiling information and makes some assumptions to produce optimized machine code.

Some of its assumptions are:

  1. The type of a variable is not changed.
  2. The structure of the object has not changed, and, hence, it can directly use the offset property defined on the object.

Inline Caches

Inline caches are the key ingredients in making JavaScript run fast. JavaScript engines use inline caches to memorize information on where to find a particular property of an object. The engines store the offset values and the bytecodes that are repeated too often in inline caches.
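
A hedged illustration of why this matters (the function and objects below are made up for the example): a call site that always sees the same shape lets the inline cache remember the offset of the property it reads.

function getName(p) { return p.name; }

getName({ id: 1, name: 'Bulbasaur' }); // same shape on every call: the inline cache
getName({ id: 2, name: 'Ivysaur' });   // can reuse the cached offset for name
getName({ name: 'Pikachu', id: 25 });  // different key order means a different shape,
                                       // so the call site becomes polymorphic and slower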

Deoptimization or Optimization Bailout

If the assumptions made by the compiler are not true, the work done by the compiler is thrown away and the engine falls back to the bytecode generated by Ignition. This is called deoptimization or optimization bailout. This happens for one of the following reasons:

  1. The type of an element has been changed at run time:
let a = 2
a = 'Hello World!'
  2. The structure of an object has been changed.
  3. The element kinds of an array have changed from the PACKED to the HOLEY version. Please note, the engines use a lot of optimization techniques for PACKED arrays. Avoid holes or undefined in your arrays.

Let’s Recap All That We Have Learned in this Tutorial

  1. JavaScript engines evolved from a mere interpreter to modern engines that use just-in-time-compilation. Just-in-time compilation combines the good parts of both the compiler and an interpreter. It compiles JavaScript code at run-time.
  2. V8 uses Ignition as the interpreter and TurboFan as the optimizing compiler. Ignition produces bytecode using AST as the input. The profiler watches code as it runs. It sends off the hot code to TurboFan. TurboFan uses profiling information to generate optimized machine code.
  3. We learned that the interpreter has a set of registers and uses them to store parameters and local variables. V8 has defined bytecodes for various operations like adding values and creating objects or functions.
  4. We learned how JavaScript engines handle different Elements Kinds.
  5. Objects are stored in the form of shapes, which saves a lot of memory and also makes it easy to fetch any of the properties on an object.
  6. We saw some of the assumptions that a compiler makes to produce optimized code. If any of the assumptions fail, it deoptimizes its code and falls back to the bytecode that was generated by Ignition.

References

  1. https://www.youtube.com/watch?v=p-iiEDtpy6I
  2. https://www.youtube.com/watch?v=5nmpokoRaZI
  3. https://mathiasbynens.be/notes/shapes-ics
  4. https://v8.dev/blog/elements-kinds

This post has been brought to you by Kendo UI

Want to learn more about creating great web apps? It all starts out with Kendo UI - the complete UI component library that allows you to quickly build high-quality, responsive apps. It includes everything you need, from grids and charts to dropdowns and gauges.



Header and Footer in Telerik UI for Xamarin ListView


The Telerik UI for Xamarin ListView control now features Header and Footer support, giving you new opportunities to customize the experience for your users.

With Telerik UI for Xamarin, you can now add custom content to the top and bottom of your list view using the HeaderTemplate and the FooterTemplate properties of the ListView. You can use them in your mobile app to display descriptive information about the content (list view items). In this blog post I will show you how both templates can be added to the RadListView control.

ListView Header and Footer

HeaderTemplate

The HeaderTemplate can be used for adding custom content to the top of the items container. It could be used in different cases, for example to add text which gives an overview for the list view content. The snippet below shows how to add the HeaderTemplate:

<telerikDataControls:RadListView.HeaderTemplate>
    <DataTemplate>
        <Label Text="All mail"/>
    </DataTemplate>
</telerikDataControls:RadListView.HeaderTemplate>

FooterTemplate

The RadListView FooterTemplate is displayed at the bottom of the list view items. Here's a sample use-case scenario: say the items in the list view are messages and you want to execute some operation to all/some of them - in the footer template you can add a button that, when clicked, deletes all/the selected messages, or marks them "as read." Here is how a sample Footer Template could be defined:

<telerikDataControls:RadListView.FooterTemplate>
    <DataTemplate>
        <Button Text="Delete all" Clicked="Button_Clicked"/>
    </DataTemplate>
</telerikDataControls:RadListView.FooterTemplate>

Please note that both templates are scrollable along the items in the ListView. For more details check our help article here.

Using Header and Footer in the Xamarin ListView

Let’s create a sample app using the HeaderTemplate and the FooterTemplate. For the demo we are going to use the RadDockLayout control. RadDockLayout provides a mechanism for child elements to be docked to the left, right, top or the bottom edge of the screen or to occupy the center area of the layout. You can easily arrange the views in order to achieve a fully featured layout of a page.

Let's create a view model and a business object that will be the source of the list view: 

public class ViewModel
{
    public ViewModel()
    {
        this.Source = new ObservableCollection<News>()
        {
            new News("As a Front-End Developer, you will be responsible for the look, feel and navigation of complex enterprise ecommerce solutions."),
            new News("We’re looking for a UX Architect for our UX/UI team who tells cohesive, well-thought and user tested stories through mind maps, user flows and ultimately, prototypes. "),
            new News("Define consistent UI style guides and page layouts. "),
            new News("You will be speaking to customers, using data to inform your designs, and, since we believe in design as a team effort, have an open work process with regular design critiques and peer feedback.")
        };
    }

    public ObservableCollection<News> Source { get; set; }
}

public class News
{
    public News(string description)
    {
        this.Description = description;
    }

    public string Description { get; set; }
}

Finally, let's declare the RadDockLayout and dock the RadListView control to occupy the remaining space of the screen:

<ScrollView BackgroundColor="White">
    <telerikCommon:RadDockLayout>
        <GridtelerikCommon:RadDockLayout.Dock="Top"
            BackgroundColor="#009688"
            HeightRequest="50">
            <LabelText="Job Descriptions"VerticalTextAlignment="Center"/>
        </Grid>
        <GridtelerikCommon:RadDockLayout.Dock="Bottom"
            HeightRequest="50"
            BackgroundColor="#659BFC">
            <LabelText="Navigation"VerticalTextAlignment="Center"/>
        </Grid>
        <Grid>
            <telerikDataControls:RadListView ItemsSource="{Binding Source}">
                <telerikDataControls:RadListView.BindingContext>
                    <local:ViewModel/>
                </telerikDataControls:RadListView.BindingContext>
                <telerikDataControls:RadListView.HeaderTemplate>
                    <DataTemplate>
                            <telerikPrimitives:RadBorderPadding="5">
                                <StackLayoutOrientation="Horizontal"BackgroundColor="#F7F7F7">
                                    <telerikPrimitives:RadBorder CornerRadius="30"
                                                        HorizontalOptions="Start"
                                                        WidthRequest="60"
                                                        HeightRequest="60"
                                                        Margin="10">
                                        <ImageSource="Avatar.png"Aspect="AspectFill"/>
                                    </telerikPrimitives:RadBorder>
                                    <StackLayoutOrientation="Vertical"Spacing="0"VerticalOptions="Center"HorizontalOptions="StartAndExpand">
                                        <LabelText="Jane"FontAttributes="Bold"TextColor="Black"Margin="0"/>
                                        <LabelText="@jane"TextColor="#919191"Margin="0"/>
                                    </StackLayout>
                                    <telerikInput:RadButtonText="Add news"
                                                    BackgroundColor="Transparent" 
                                                    BorderColor="#007AFF"
                                                    BorderRadius="30"
                                                    BorderWidth="2"
                                                    Margin="5"
                                                    WidthRequest="100"
                                                    HeightRequest="40"
                                                    Padding="12,3,12,3"
                                                    HorizontalOptions="End"
                                                    VerticalOptions="Center"
                                                    TextColor="#007AFF"/>
                                </StackLayout>
                        </telerikPrimitives:RadBorder>
                    </DataTemplate>
                </telerikDataControls:RadListView.HeaderTemplate>
                <telerikDataControls:RadListView.FooterTemplate>
                    <DataTemplate>
                        <telerikPrimitives:RadBorderPadding="5"BackgroundColor="#F7F7F7">
                            <telerikInput:RadButtonText="Share news"
                                            TextColor="White"
                                            WidthRequest="200"
                                            HeightRequest="40"
                                            VerticalOptions="Center"
                                            HorizontalOptions="Center"
                                            CornerRadius="25"
                                            Margin="0, 5, 0, 5"
                                            Padding="15,5,15,5"
                                            BackgroundColor="#007AFF"/>
                        </telerikPrimitives:RadBorder>
                    </DataTemplate>
                </telerikDataControls:RadListView.FooterTemplate>
                <telerikDataControls:RadListView.ItemTemplate>
                    <DataTemplate>
                        <telerikListView:ListViewTemplateCell>
                            <telerikListView:ListViewTemplateCell.View>
                                <StackLayoutOrientation="Vertical"Margin="10, 10, 10, 10"BackgroundColor="White">
                                    <LabelText="{Binding Description}"FontSize="15"Margin="3"TextColor="Black"/>
                                </StackLayout>
                            </telerikListView:ListViewTemplateCell.View>
                        </telerikListView:ListViewTemplateCell>
                    </DataTemplate>
                </telerikDataControls:RadListView.ItemTemplate>
            </telerikDataControls:RadListView>
        </Grid>
    </telerikCommon:RadDockLayout>
</ScrollView>

The image below shows the final result:

Demo App

That's all, now you have your list view with header and footer. 

Tell Us What You Think

Have we caught your interest with the RadListView new features and the RadDockLayout control? We would love to hear what you think about them. If you have any ideas for features to add, do not hesitate to share this information with us on our Telerik UI for Xamarin Feedback portal.

Don’t forget to check out the various demos of the controls in our SDK Sample Browser and the Telerik UI for Xamarin Demos application.

If you have not yet tried the Telerik UI for Xamarin suite, take it out for a spin with a 30-day free trial, offering all the functionalities and controls at your disposal at zero cost.

Start My Trial

Happy coding with our controls!

Creating a Serverless Application with KendoReact


In this article, we walk you through creating a serverless application with the Serverless Framework, deploying it to the cloud, and creating a user interface for it using KendoReact.

Serverless is an execution model that allows cloud providers to dynamically allocate resources at the function level in your code rather than the entire application. This provides for a more cost-effective and flexible way to run your application in the cloud.

Some of the most widely used serverless platforms are Amazon Lambda, Google Cloud Functions, Microsoft Azure Functions and IBM OpenWhisk.

The serverless model is gaining traction due to a few advantages over traditional cloud applications:

  • Low cost and high flexibility: serverless platforms automatically provision the optimal hardware to run your code when triggered
  • Low overhead: serverless providers charge for the amount of time your functions spend running; you pay less if there is less traffic to your application; you also pay less if your functions run fast
  • They shift the focus from cloud infrastructure to application: you are not required to manage or maintain your own servers, making it easier to focus on your application

The Serverless Framework is an open-source project that allows you to abstract the specific serverless provider and write your application the same way on all cloud platforms. The Serverless Framework adds cloud provider portability to the already impressive list of the benefits of the serverless model.

In this article, we walk you through creating a serverless application with the Serverless Framework. This is achievable with just about any serverless platform, including Progress Kinvey, but in this example we're going to deploy it to Amazon Lambda, and then create a user interface for it using KendoReact.

About KendoReact

Progress KendoReact is a library of native UI components created specifically for use in React development. KendoReact has no dependencies, and it provides a set of native UI components optimized for React. As such, we can use KendoReact to simplify and speed up UI development for Serverless applications.

Project Overview

In this article, we create a simple Pomodoro timer, using KendoReact and Serverless. If you're new to the Pomodoro technique, you can read about it here.

We create a simple interface for starting and stopping Pomodoro timers and listing the timers recently completed. The events are tracked in an AWS DynamoDB database. We use KendoReact components in the user interface.

We walk you through the dependency installation, the creation of the backend and the frontend, and the deployment of the project to AWS Lambda.

Dependencies

Backend

First, set up the credentials for your Amazon Web Services (AWS) account. If you don't have one, sign up for one on the AWS website here. Once you have the credentials, set them up with the AWS Command Line Interface (AWS CLI). Instructions on how to do this are here. For setup to be as easy as possible, your AWS account should have Admin credentials. If this is the first time you've used the AWS CLI, configure it according to these instructions.
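
If you have not configured the CLI before, a typical setup session looks roughly like this (the values shown are placeholders for your own keys and preferences):

$ aws configure
AWS Access Key ID [None]: <your access key id>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: us-east-1
Default output format [None]: json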

Next, make sure that you have Node.js installed. As of writing, the latest stable version of Node.js is 10.15.0. Installing Node.js also installs the latest version of npm.

Finally, install the Serverless Framework by following the instructions listed in the article, Getting Started with Serverless.

Frontend

The requirements for the frontend of the project are similar to the backend:

  • Node.js (as of this writing, the latest version is 10.15.0)
  • npm (included with Node.js)
  • create-react-app, which is included with modern versions of Node.js
  • KendoReact, which we'll add later

Creating the Backend for the Serverless Application

Make sure that you saved your AWS credentials correctly. Serverless uses them to access the cloud provider, as detailed in the Dependencies section.

Create your backend structure by using this command:

$ serverless create -t aws-nodejs -p backend

This command produces a backend directory with two files in it, handler.js and serverless.yml:

$ tree
. 
├── backend
│  ├── handler.js
│  └── serverless.yml

handler.js contains the code of our backend. serverless.yml declares all the infrastructure necessary for our backend.

We start by defining two functions — one to fetch the saved Pomodoro entries, and one to create a new Pomodoro timer. Replace the current content in handler.js with the following code:

module.exports.getPomodoros = async (event, context) => {
  // fetch all pomodoros from DynamoDB table
  const pomodoros = await documentClient
    .scan({ TableName: "pomodoros" })
    .promise();  

  return response(JSON.stringify({ pomodoros }));
};

module.exports.postPomodoro = async (event, context) => {
  const Item = JSON.parse(event.body);
  await documentClient.put({
    TableName: "pomodoros",
    Item
  })
  .promise();

  return response(JSON.stringify({ Item }));
};

Both functions access the pomodoros table via the documentClient object. This is a mapping that the AWS DynamoDB JavaScript library conveniently provides. We declare it in the same file, above the functions:

const AWS = require("aws-sdk");
const documentClient = new AWS.DynamoDB.DocumentClient();

With that, we are able to access the DynamoDB tables. We also define the response function with the CORS headers needed for the backend and the frontend to work together:

const response = body => ({  
  // return the CORS headers in the response, without that it
  // wouldn't work from the browser
  headers: {  
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Credentials": true
  },
  statusCode: 200,  
  body
});

This completes the handler.js file. Next, we expose both of our handler functions to the outside world via the serverless.yml file. We add the function definitions first, overwriting anything you have in the functions section:

functions:  
  getPomodoros:  
    handler: handler.getPomodoros  
    events:  
      - http:
          path: /
          method: GET
          cors: true
  postPomodoro:
    handler: handler.postPomodoro
    events:
      - http:
          path: /add
          method: POST
          cors: true

Second, we define the DynamoDB database:

resources:
  Resources:
    # DynamoDB Table for pomodoro entries
    PomodorosTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: pomodoros
        AttributeDefinitions:
          - AttributeName: name
            AttributeType: S
        KeySchema:
          - AttributeName: name
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 2
          WriteCapacityUnits: 2

Finally, we define a location to persist stack.json. This is how the frontend later knows where to find our backend application:

plugins:
  - serverless-stack-output

custom:
  output:
    # Save endpoint URLs to stack.json inside frontend source
    # directory
    file: ../frontend/src/stack.json

That's it! Now we can install all the dependencies and deploy our Serverless backend to Amazon Lambda. First, install the plugin we declared above:

$ serverless plugin install --name serverless-stack-output

then

$ npm install

And deploy:

$ npm run deploy # or serverless deploy

And after a few minutes:

$ npm run deploy

> serverless deploy

Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
.....
Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (3.53 KB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
......................................................
Serverless: Stack update finished...
Service Information
service: serverless-kendo-pomodoro
stage: dev
region: us-east-1
stack: serverless-kendo-pomodoro-dev
api keys:
  None
endpoints:  
  GET - https://pyjsahfuk7.execute-api.us-east-1.amazonaws.com/dev/
  POST - https://pyjsahfuk7.execute-api.us-east-1.amazonaws.com/dev/add
functions:
  getPomodoros: serverless-kendo-pomodoro-dev-getPomodoros
  postPomodoro: serverless-kendo-pomodoro-dev-postPomodoro
Serverless: Stack Output saved to file: ../frontend/src/stack.json

Our backend is deployed to AWS! We're ready for the next step.

Cleaning Up

The Serverless Framework creates AWS resources for you. Once you've finished setting up the Serverless application and working with its frontend, remember to remove all the resources created by running $ serverless remove in the backend directory to avoid unexpected AWS charges for your account.

Creating the Frontend for the Serverless Application

The easiest way to create a structure for the frontend is to use the create-react-app utility. Run this command:

$ npx create-react-app frontend

The frontend consists of two components:

  • The main one is <App />. This is all the logic for communicating with the backend via HTTP requests and rendering the data fetched from the backend.
  • <Timer /> is used to measure the time.

For the App component, we use the Grid and GridColumn components from KendoReact. First install and save the packages:

$ npm install --save @progress/kendo-react-grid \
                     @progress/kendo-data-query \
                     @progress/kendo-react-inputs \
                     @progress/kendo-react-intl \
                     @progress/kendo-react-dropdowns \
                     @progress/kendo-react-dateinputs

Add it to the import section of App.js:

import { Grid, GridColumn } from "@progress/kendo-react-grid";

And replace the current <div className="App"> with the following:

<div className="App">
  <h1 className="App-title">Serverless KendoReact Pomodoros</h1>
  <Timer onFinish={this.onFinish} />
  <Grid data={this.state.data} classNames="grid">
    <GridColumn field="PomodoroName" title="Pomodoros Done" />
    <GridColumn field="Date" />
    <GridColumn field="Elapsed" />
  </Grid>
</div>

Here, we use a simple table to show the Pomodoro timers that we have already completed, plus reference a Timer component that has all the logic for measuring the time spent in the Pomodoros and between them.

The Timer component uses the RadialGauge, Input, and Button KendoReact components, and you can see its entire logic here.

The frontend uses stack.json to determine the details of the endpoint it is connecting to. This file is generated during the deploy of the backend. It is important to deploy the backend before running the frontend.

Once the backend is deployed, we parse the backend endpoint in App.js:

import { ServiceEndpoint } from "./stack.json";
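
As a rough sketch (assuming ServiceEndpoint is the API base URL written to stack.json and the GET / and POST /add routes defined earlier in serverless.yml; the item shape below is purely illustrative), the frontend can then call the backend like this:

async function loadPomodoros() {
  // GET / returns { pomodoros: [...] } from the scan in handler.js
  const res = await fetch(ServiceEndpoint);
  const { pomodoros } = await res.json();
  return pomodoros;
}

async function savePomodoro(item) {
  // POST /add stores a new item; it must at least carry the `name` key
  // used by the DynamoDB table definition
  await fetch(`${ServiceEndpoint}/add`, { method: 'POST', body: JSON.stringify(item) });
}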

The codebase for our frontend is small now that KendoReact implements all the components. We can focus on defining the business logic and presenting the data clearly.

We won't cover all the frontend code in this article, as there's a lot of boilerplate provided from Create React App. You can find the complete frontend codebase here. Clone that repo before continuing.

Running the Frontend

Once the frontend is ready, and after the backend is deployed, we can run the frontend locally by running the following commands in the frontend directory:

$ npm install

Then:

$ npm start

After that, the frontend is accessible at localhost:3000 in your browser. Try adding a few pomodoros:

Pomodoros example

Notice the smooth transitions in the fields provided by KendoReact with no extra code on our side:

Seamless transitions

That's it! We're ready for some productive time with our Pomodoro timer.

Conclusion

As we've seen, it's easy to get KendoReact and Serverless to work together. Configuring a React application to use a Serverless backend only requires a serverless.yml and a stack.json file. There is a lot you can do with a Serverless backend.

KendoReact provides convenient components to use in many situations. We've used grids, the buttons and the text fields in this article, but there are many more — including tools for animation, conversational UIs, PDF processing, and so on.

Serverless is a great way to create simple and scalable APIs and automate the deployment of the infrastructure required for those APIs. Learn more about the Serverless Framework here. If you want to learn about how the Serverless deployment process works on AWS, go here. As I mentioned earlier, while we happened to use AWS in this example, you could also have used a platform like Kinvey, which you can learn more about here.

Learn more about the KendoReact components here. Documentation on specific components included in KendoReact is here. And here is how to install KendoReact.

How did your setup go with Serverless and KendoReact? Let us know in the comments!

The Uno Platform And WebAssembly


On this episode of Eat Sleep Code, we talk about using Uno to create cross platform .NET apps, and what WebAssembly means for deploying .NET to the web.

Join us as we talk with Jérôme Laban, Uno Platform CTO, about the prospects of using Uno to create cross platform .NET applications. Jérôme also discusses the use of WebAssembly by Uno and what WebAssembly means for deploying .NET to the web.

You can listen to the entire show and catch past episodes on SoundCloud. Or just click below.

Jérôme Laban

jerome.laban

Jerome Laban has been programming since 1998, mainly involved in .NET and C# development as teacher, trainer, consultant in France. He is the CTO of Uno Platform in Montreal, a framework aiming at improving the development cycle of cross-platform apps using Windows, iOS, Android and WebAssembly using Mono and Xamarin. Uno Platform

Show Notes

Building Fiddler Importers


Learn how to build your own Fiddler importers with this simple guide.

This is a guest post from Eric Lawrence, written in collaboration with the Fiddler team. Would you like to write about Fiddler for our blog? Let us know in the comments below.

When reproducing a bug on a website or web service, traffic captures are invaluable because they allow developers and testers to easily see exactly what’s going on at the network level, even without having access to the reproduction environment. Over the years, Fiddler’s Session Archive Zip (SAZ) file format has become the gold standard for such captures because SAZ files are easily captured (with Fiddler, Fiddler Everywhere, FiddlerCap, or a Fiddlercore-based tool) and are easily reviewed with the powerful Fiddler desktop client.

However, in some cases, you may wish to debug network traffic that was originally captured using another tool and exported in a format like a Wireshark PCAP or the browser-standard HTTP Archive Format (HAR). Fiddler already includes importers for those formats, but what if you want to support a different format that Fiddler doesn’t yet support?

Fortunately, Fiddler is extremely extensible, and this extends to the import/export system. We’ve previously published an example exporter, and today I’d like to talk about how you can build an importer.

I’m now an engineer on the Microsoft Edge browser team, and our preview builds are now being tested by users around the world. When those users encounter problems, sometimes the best path for debugging starts with capturing a network traffic log. For some of our more technical users, that might involve collecting a SAZ file using one of the existing Fiddler-* products. But for other users (especially our upcoming MacOS users), collecting a SAZ file involves more overhead. In obscure cases, collecting a SAZ file might cause a repro to disappear.

Fortunately, the new Microsoft Edge browser is built atop the Chromium open-source browser engine, and Chromium includes a built-in network logging feature. To collect a network log in Edge, simply navigate a tab to edge://net-export (in Google Chrome, you’d visit chrome://net-export instead).

Configure the options using the radio buttons and click Start Logging to Disk. Open a new tab and reproduce the problem. After you’ve completed the repro, click Stop Logging. At the end, you’ll have a NetLog .json file containing all of the network events. The output JSON file can be viewed using a web-based viewer:

However, in many cases, it would be more desirable to view the file using Fiddler. Unlike the web based viewer (which tends to show content as a base64-encoded string of goo), Fiddler includes built-in formatters for common web formats (e.g. images, JSON, etc), and it makes it easier to export response bodies as files, resend captured requests using the Composer, and replay captured responses using the AutoResponder.

Fiddler does not include a NetLog importer by default, so let’s build one.

First, we start by looking at the Chromium documentation for the relatively straightforward file format. NetLog files are JSON-serialized streams of network events. The low-level events (tracking DNS lookups, TCP/IP connections, HTTPS certificate verifications, etc) are associated to a request identifier, which represents one or more network requests (what Fiddler calls a “Web Session”). So, the task of our importer is to:

  1. Parse the JSON file into a list of events
  2. Bucket the events by the request identifiers
  3. Generate one or more Fiddler Web Sessions from each bucket

The C# source code for the FiddlerImportNetlog extension is available on Github. Beyond all of the typical project files, the extension itself consists of just three files: Importer.cs, FiddlerInterface.cs, and Properties\AssemblyInfo.cs.

The AssemblyInfo.cs contains just one line of interest:

[assembly: Fiddler.RequiredVersion("4.6.0.0")]

Fiddler requires this attribute to load the assembly as an extension; it specifies the minimal version of Fiddler you’ve tested the extension to run in.

The FiddlerInterface.cs file contains a simple class which implements the ISessionImporter interface. The ProfferFormat attribute on the class specifies the format type and a descriptive string to show in the import dialog:

The importer interface exposes a single method, ImportSessions, which accepts a string specifying the selected format, a dictionary of options, and an optional event handler to call back with import progress events. The function returns an array of Sessions created from the imported data.

The bulk of the extension’s logic is found in Importer.cs. The caller provides a StreamReader from which the imported file’s text is read and then parsed using the JSON.JsonDecode method made available by the using Fiddler.WebFormats statement at the top of the file. WebFormats.JSON is a fast, simple parser that reads the JSON into Hashtables (for JS objects), ArrayLists (for JS arrays), and primitive types (booleans, doubles, strings, etc).

After parsing, the importer looks up the mappings from named event types (e.g. URL_REQUEST_START_JOB) to the integer identifiers recorded inside the event entries, then parses the list of event entries, bucketizing those with a source of URL_REQUEST by the request id. Then, the ParseSessionsFromBucket method loops over each URL_REQUEST’s events to collect the data (e.g. request headers, response headers, response bodies) needed to generate a Fiddler Web Session. Two notable details:

  1. The NetLog format may have multiple request/response pairs in a single URL_REQUEST (e.g. if the request resulted in a redirect or an automatic authentication challenge/response)
  2. The NetLog format does not presently store (anywhere) the request body content for POST requests. In contrast, response body bytes may or may not be present based on the options chosen by the user when starting the logging process

After generating the list of extracted Web Sessions, the importer generates a few “mock” Sessions that allow the user to peruse the raw data captured in the file (e.g. the browser configuration, list of enabled extensions, etc).

After an hour of quick-and-dirty coding, Fiddler can now import NetLog json files:

If you’re just interested in using this importer without building it yourself, you can install it from my website. If you find any bugs, please file them!

Hopefully, you’ve found this to be a useful introduction to how easy it is to build importers to get your data into Fiddler!

Understand Basic Typescript Types


TypeScript is the cool (new? sort-of) kid on the block. In this article, we will explore the basics of "types," the core feature this superset gives us - and why you may care about joining in on all the funz.

So maybe you’ve heard about this thing called TypeScript. Maybe you haven’t – maybe you live in a pineapple under the sea or are still hacking away at ActionScript 2. Regardless, TypeScript is a thing, it’s happening, it’s here to stay, and it’s gaining strength by the minute.

Let’s dive in on the very basics.

Type-Script?

So what exactly is TypeScript (TS for short)?

TypeScript is a typed superset of JavaScript that compiles to JavaScript.
-Microsoft

Ok so, that’s a bunch of fancy terms to say that it’s an evolved form of JS. A superset language is a language that is built on TOP of the other one, so sort of like a Pokemon evolution without all the pretty graphics and cuteness.

You get all that is JavaScript, plus a little extra.

What about the typed part? It means that TypeScript allows you to basically tell your computer, when you’re coding, what each part of your code will hold. Think of it as putting labels on your variables and functions to make them strict on what they should be doing or containing.

Ok so, what about the compiles to JavaScript? Well, currently TypeScript is not something that browsers can understand. Browsers speak JavaScript. So when you work with this superset, you are going to have to use some sort of tool to change it back to JavaScript before you deploy.
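
For example, with the TypeScript compiler installed from npm, turning a .ts file into plain JavaScript is a single command (the file names here are placeholders):

$ npm install -g typescript
$ tsc app.ts   # emits app.js, which browsers can run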

This all sounds super scary? Maybe. Ever worked with SCSS? Aha! Well, that’s another superset but for CSS.

Let’s talk Moneyz

Variables in JavaScript don’t have types. This means you can basically put anything and everything you want into a variable and it’s completely valid code. For example:

let myBox = 'This would be a string';
myBox = 23; // But now it's a number
myBox = { box: true }; // I can also put objects
myBox = ['there is no other like it', 'and this one is mine']; // and arrays :)

This code above is completely valid JavaScript code, a variable can change its contents as needed because the box (variable) that contains this content is flexible and will adjust to whatever you pass to it.

Sweet, ok, but why would I want to lose this flexibility?

We’ve all run into a scenario where something like this happens. You have a variable, which is a number, let’s say it’s the user’s bank account moneyz.

let moneyz = 20; // Yes, this is how I spell it, let me be

So far so good, but maybe you make a call somewhere that is going to tell you how much money was just deposited into the account. But something somewhere in the land of “omg how am i going to debug this” made the second value a STRING.

So you merrily code a function:

function incrementMoneyz(oldMoneyz, newMoneyz) {
    return oldMoneyz + newMoneyz;
}

However in reality now you have a case where, say, you’re adding up a number and a string. So the 20 she had before, added to the “20” she just deposited into her account…

let moneyz = 20;
let deposit = "20";
let moarMoneyz = incrementMoneyz(moneyz, deposit); // Result => "2020"

Now, TypeScript is not going to protect you from runtime bugs and wild APIs, because in the end when you deploy your code it’s going to be good old JavaScript. But if this error is happening because of some oversight in your code, TypeScript is going to yell at you and hopefully prevent this mistake from happening.

So how then, do we set types on our variables?

Type-Proofing

Super simple, so let’s learn by example and switch gears to TypeScript (TS from now on - fingers hurt).

// This is TYPESCRIPT
let number = 20;
number = '20'; // ERROR

In TS, the compiler checks your code like big brother. So, in the above example, you are declaring a number variable old-school, no type. However, TS will know and label this variable as a Number. Why? Because you’re putting a number in it initially!

So what’s going to happen is that when you compile this code, TS will throw an error because '20' is not a number, it’s a string.

Simple, right? This type of, well… type… is called an inferred type. That means you’re leaving all the heavy lifting to TS.

What if, however, we want to keep all the control of types for ourselves? Then we have to explicitly tell TS what we want.

let typedNumber: number = 20;
typedNumber = '20';

Same example, but I switched the variable name to typedNumber for clarity.

See the : number part after the variable declaration? That’s the actual type! It reads:

I want a new variable, called typedNumber, with a type of a number value, and the initial value is 20.

When the compiler hits the second line and sees the string, it will also complain and throw an error - as expected.
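And, going back to the moneyz example, the same annotations work on function parameters, so that sneaky string deposit gets caught before the code ever runs (a sketch, assuming we only ever want numbers in and out):

function incrementMoneyz(oldMoneyz: number, newMoneyz: number): number {
  return oldMoneyz + newMoneyz;
}

const deposit = "20";
// incrementMoneyz(20, deposit); // Error: argument of type 'string' is not assignable to parameter of type 'number'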

Numbers, check. What other types are there though?

Booleans

let myBool: boolean = false;
let emptyBool: boolean; // Yup, you can declare it without an initial value!
emptyBool = 'false'; // Error, emptyBool has a type of boolean!

Strings

let myString: string = 'TS is bae';
myString = ['EVIL']; // Error! myString is typed as string

let quotes: string = "You can also use double quotes :)";
let awe: string = ":o";
let backtick: string = `And backticks! ${awe}`;

Arrays

let anArrayOfNumbers: number[] = [1, 2, 3];
let alsoAnArrayOfNumbers: Array<number> = [1, 2, 3];

let anArrayOfStrings: string[] = ['a', 'b', 'c'];
let alsoAnArrayOfStrings: Array<string> = ['a', 'b', 'c'];

Arrays have a little gotcha in how they’re typed because, as you can see from the example above, there are two different syntaxes. In the first one, we tell the compiler the type of the array’s contents followed by []. In the second, we first say it’s an Array and then declare what’s going to be inside it.

Feel free to use whichever works better for you.
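Either way, the type follows the array around, so the operations you perform on it are checked too (a minimal sketch; scores is just a made-up example):

let scores: number[] = [10, 20, 30];
scores.push(40); // OK
// scores.push('50'); // Error: '50' is a string, and scores only takes numbers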

Tuple

So what is a Tuple?

Simply put, a Tuple is an array with a fixed number of elements whose types you know beforehand, and those types can differ from one another.

This is best explained with an example. Imagine you have an API for users, which you know returns an array with the name at index 0, the last name at index 1, and the age at index 2.

// ['Marina', 'Mosti', 32];
let userTuple: [string, string, number];
userTuple = ['Daenerys', 'Targaryen', 17]; // This is valid. Also. How you doin'?
userTuple = ['Sansa', 'Stark', 'IDK kill her already']; // This would be an error

Keep in mind that although Tuples are super useful, they should only be used in the particular cases where you know your data will always have this exact format, because even something as simple as a change in element position will break the type.
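For instance, simply shuffling the order of the same data is enough to upset the compiler (a sketch reusing the user example; the variable name is just for illustration):

let shuffledUser: [string, string, number];
shuffledUser = ['Marina', 'Mosti', 32]; // OK: each position matches its declared type
// shuffledUser = [32, 'Marina', 'Mosti']; // Error: the values no longer line up with their positions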

Enum

Enum is another new datatype in TypeScript that we get to play with.

Ever been in a scenario where you want to define a list to just make some reference later on, and you end up doing a bunch of constant literal strings to keep track of it?

Take for example a user’s membership to a site, which has a few defined options.
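The old-school workaround looks something like this (a sketch - the constant names are made up):

const MEMBERSHIP_FREE = 'FREE';
const MEMBERSHIP_MEMBER = 'MEMBER';
const MEMBERSHIP_GOLD = 'GOLD';

With an enum, that same list becomes a single, self-describing type: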

enum Membership { Free, Member, Gold }

The enum here is just defining a list. This is not a variable - imagine that you’re only making the blueprint for this type.

Now to actually use it, we could do the following:

enum Membership { Free, Member, Gold }

const userMembership: Membership = Membership.Free;

Take a look at the const variable that we’re setting up. The type is actually Membership, which is referencing the enum that we created before. This way, we can actually access Membership.Free and set it to this variable.

Keep in mind something important: The actual value of the variable is a number! The enum actually behaves like an array, where each item gets a 0 index incremental value. You can also override these internal values by setting them with an equal sign.

enum Membership { Free = 3, Member, Gold }

In this case, Member would be 4, and Gold would be 5 - they increment starting from the last known value, in this case 3, defined by Free.
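Since numeric enum members are really just numbers under the hood, you can poke at them directly (a small sketch; numeric enums in TS also generate a reverse mapping from value back to name):

enum Membership { Free = 3, Member, Gold }

console.log(Membership.Gold); // 5
console.log(Membership[5]); // "Gold"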

But, I’m a Free Spirit!

So you may argue at some point that not having types is actually a benefit. That you prefer your functions to be able to return three different types of data depending on the output. That you will not succumb to these chains!

Good news! There's one more datatype especially for this type of situation: any

As the name already tells you, this type makes the variable behave in the same way that plain JavaScript does.

let anarchy: any = 'YAAAAS';
anarchy = 9000; // works! :)

Wrapping Up

TypeScript is coming into the FE dev world with force. If you have been putting off playing with it and learning it, now is the time to pick it up and start polishing your typed skills.

If you want to read/learn more about it, I recommend diving into the handbook and tutorials on the official website.

Thanks for reading!


This post has been brought to you by Kendo UI

Want to learn more about creating great web apps? It all starts out with Kendo UI - the complete UI component library that allows you to quickly build high-quality, responsive apps. It includes everything you need, from grids and charts to dropdowns and gauges.

