The new 2.5 release of the CORS Filter for handling cross-domain requests offers improved performance. This benefits Java web servers that handle lots of traffic, particularly in situations where a significant proportion of that traffic consists of invalid or unauthorised CORS requests.
The improvement is achieved by using static (cached) exceptions within the filter. Here is an informative discussion with metrics about Java exception handling and how it can be sped up.
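The trick can be sketched as follows (the class name and message below are illustrative, not the filter's actual internals): the exception is created once, overrides fillInStackTrace to skip the expensive stack trace capture, and is then rethrown for every invalid request.

```java
// Sketch of the "static exception" technique: a pre-allocated exception
// with stack trace capture disabled, reused for expected failures such
// as invalid CORS requests.
public class StaticExceptionDemo {

    static class InvalidRequestException extends Exception {

        InvalidRequestException(String message) {
            super(message);
        }

        // Skipping stack trace capture makes construction cheap and
        // allows the instance to be safely reused
        @Override
        public synchronized Throwable fillInStackTrace() {
            return this;
        }
    }

    // One shared instance, created once and thrown many times
    static final InvalidRequestException INVALID_ORIGIN =
        new InvalidRequestException("CORS origin denied");

    public static void main(String[] args) {
        try {
            throw INVALID_ORIGIN;
        } catch (InvalidRequestException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Because the instance is shared, its stack trace is meaningless; that is acceptable for control-flow exceptions whose message alone carries the information.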
The 2.5 release also fixes an NPE bug affecting Origin validation during configuration.
Version 2.4 of the Java CORS Filter for handling cross-domain requests has added support for automatic reconfiguration. You can change your CORS policy at runtime without having to reload your web service or application. Kudos to Alexey Zvolinsky for contributing this cool new feature.
Automatic reconfiguration is provided by a special variant of the CORS Filter. Stick the following declaration into your web.xml file to use it:
<filter>
    <filter-name>CORS</filter-name>
    <filter-class>com.thetransactioncompany.cors.autoreconf.AutoReconfigurableCORSFilter</filter-class>
</filter>
This filter variant must be configured with an external Java properties file. The filter init-param style configuration will not work here, as the web.xml file may not be modified at runtime.
The configuration file will be polled for changes every 20 seconds. If a change is detected, the filter will automatically reload itself with the new configuration. If the new configuration is invalid, an error message will be printed to the server log and the filter will continue operating with its previous settings intact.
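For illustration, an external configuration file might look like this (the property names follow the CORS Filter documentation; the exact set and values depend on your filter version and policy):

```properties
# CORS policy, reloaded automatically when this file changes
cors.allowOrigin = https://app.example.com
cors.supportedMethods = GET, POST, HEAD, OPTIONS
cors.supportedHeaders = Authorization, Content-Type
cors.supportsCredentials = true
cors.maxAge = 3600
```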
Check out the CORS Filter docs for the complete instructions on setting up automatic reconfiguration.
Four years after its inception, it was time for a new major 2.0 release of the CORS Filter, which enables Java web applications to handle cross-domain requests according to the W3C specification for Cross-Origin Resource Sharing.
What are the highlights of the new 2.0 version?
ServletResponse.reset(). Some web applications and frameworks (e.g. RESTEasy) reset the servlet response when an HTTP 4xx error is produced. A response wrapper ensures previously set CORS headers survive such a reset. The credits for this patch go to Gervasio Amy from Argentina.
The new release was pushed to Maven Central and should be available from there by tomorrow.
<dependency>
    <groupId>com.thetransactioncompany</groupId>
    <artifactId>cors-filter</artifactId>
    <version>2.0</version>
</dependency>
OpenID Connect is a new web standard for OAuth 2.0-based sign-on and identity provision. It was inspired by OAuth 2.0’s massive success and adoption (Facebook, Google, etc.) in recent years, helped by the protocol’s focus on ease of client app integration, a crucial factor in attracting social and consumer app developers in large numbers.
The OpenID Connect WG was formed a couple of years ago by experts in the field who understood OAuth 2.0’s potential. They set out to define a simple identity layer on top of it, coining a JSON-based identity token (the JWT) and a UserInfo endpoint where client apps can retrieve consented profile information about the end-user. All this has been designed to mesh nicely with OAuth 2.0’s existing flows and tokens, while satisfying a wide range of applications in the social, consumer and enterprise domains.
The plans for our next release are outlined in the Connect2id server roadmap. But until we proceed with it we’re going to have a few days of well deserved rest 🙂
The latest 2.24 release of the Nimbus JOSE + JWT library removes the Apache Commons Codec dependency for base 64 and base 64 URL-safe encoding / decoding, by switching to an internal codec (from the migbase64 project). This should greatly ease use of the library on Android devices.
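The library's internal codec isn't shown here, but the effect of URL-safe Base64 can be illustrated with the JDK's own java.util.Base64 API (available since Java 8):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64UrlDemo {

    public static void main(String[] args) {
        byte[] data = "{\"alg\":\"HS256\"}".getBytes(StandardCharsets.UTF_8);

        // URL-safe alphabet: '+' becomes '-', '/' becomes '_', padding omitted
        String encoded = Base64.getUrlEncoder().withoutPadding().encodeToString(data);
        System.out.println(encoded); // eyJhbGciOiJIUzI1NiJ9

        // Round-trip back to the original bytes
        byte[] decoded = Base64.getUrlDecoder().decode(encoded);
        System.out.println(new String(decoded, StandardCharsets.UTF_8));
    }
}
```

The URL-safe form is exactly what appears in the dot-separated segments of a serialised JWT.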
Thanks to everyone who contributed patches and suggestions (and endured the long wait 🙂).
The new release should reach Maven Central by the end of today. Alternatively, you can get a copy of the library JAR from the download section of the git repo.
OpenID Connect is an official standard as of today. The specification was approved after voting by the OpenID Foundation, and this marks the completion of the long and laborious process to design and specify a new single sign-on (SSO) protocol for the Internet based on the successful OAuth 2.0 framework.
We began development of an OpenID Connect server for enterprises in early 2012 and want to thank everyone on the Connect, OAuth and JOSE workgroups for contributing to the standard and providing us with guidance on the many questions that we faced as we worked on the SDK and the Connect2id server.
The official announcement can be read on the foundation’s website.
Today we put up an online demo of the Connect2id server along with a generic OpenID Connect client. With that we wish to show the capabilities of the new internet standard for single sign-on (SSO) based on the successful OAuth 2.0 framework. OpenID Connect is designed to sign users in to web as well as native apps and also provides a standard extensible schema for provisioning user details (called UserInfo), such as email, name and contact information, to client applications.
The OpenID Connect 1.0 specification is expected to become final in the spring of 2014. Around the same time we are preparing to release our Connect2id server for business customers.
You can test the OpenID Connect login by going to https://demo.c2id.com/oidc-client.
Just click on “Login with OpenID Connect” and when you’re redirected to the IdP server enter “alice” + “secret” as credentials.
Upon returning to the OpenID Connect client you should see the process of decoding the authentication response, making the token request, verifying the ID token and extracting its content, and finally the UserInfo request being made. The client was built with our open source OAuth 2.0 SDK with OpenID Connect extensions.
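As a rough sketch, the first step of that flow, building the authentication request, comes down to a redirect URL with the standard OpenID Connect code-flow parameters. The endpoint, client ID, redirect URI and state below are hypothetical placeholders:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class AuthRequestDemo {

    public static void main(String[] args) throws UnsupportedEncodingException {
        // Hypothetical client settings -- substitute your own registration values
        String authzEndpoint = "https://demo.c2id.com/authz";
        String clientID = "example-client";
        String redirectURI = "https://client.example.org/cb";

        // Standard OpenID Connect code-flow parameters; the "openid" scope
        // value is what marks this as an OpenID Connect request
        String url = authzEndpoint
            + "?response_type=" + URLEncoder.encode("code", "UTF-8")
            + "&scope=" + URLEncoder.encode("openid profile email", "UTF-8")
            + "&client_id=" + URLEncoder.encode(clientID, "UTF-8")
            + "&redirect_uri=" + URLEncoder.encode(redirectURI, "UTF-8")
            + "&state=" + URLEncoder.encode("af0ifjsldkj", "UTF-8");

        System.out.println(url);
    }
}
```

In practice an SDK such as our open source OAuth 2.0 / OpenID Connect SDK assembles and parses these messages for you.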
The demo Connect2id server is set to remember user sessions for 15 minutes, so if you come back to it within that time you will be redirected straight to the consent form.
The OpenID Connect client also has two other tabs, “Provider details” and “Client details”, where you can configure it to speak to another public OpenID Connect server (IdP). We intend to add more OpenID Connect request options to the client UI in the future.
Coding distributed services and apps often calls for marshalling Java objects into a binary form that can be streamed over the network. Infinispan, for example, requires objects to be serialised so they can be multicast to the nodes of the data grid.
The standard Java serialisation implementation packs a lot of object data in order to be able to automatically reproduce an object upon deserialisation. Devising your custom serialiser can save a lot of bytes, as the following snippet demonstrates:
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;
import java.util.Date;

// Serialise a date object
Date now = new Date();

// As object
ByteArrayOutputStream bout = new ByteArrayOutputStream();
ObjectOutput oout = new ObjectOutputStream(bout);
oout.writeObject(now);
oout.flush();
System.out.println("Date writeObject length: " + bout.toByteArray().length);

// As long representation
bout = new ByteArrayOutputStream();
oout = new ObjectOutputStream(bout);
oout.writeLong(now.getTime());
oout.flush();
System.out.println("Date writeLong length: " + bout.toByteArray().length);
Running the above code produces the following result:
Date writeObject length: 46
Date writeLong length: 14
Amazing: converting the Date object to its long representation results in a more than three-fold saving. For every 1 million transmitted Date objects that means a difference of 32 Mbytes.
The conclusion is clear: custom serialisers make sense. They can reduce network traffic and speed up the overall responsiveness of your distributed service or app.
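In plain Java, outside any particular framework, such a custom serialiser can be sketched with the java.io.Externalizable interface. The wrapper class below is illustrative:

```java
import java.io.*;
import java.util.Date;

// Sketch: a wrapper that serialises a Date as a bare long.
public class CompactDate implements Externalizable {

    private Date date;

    public CompactDate() {} // public no-arg constructor required by Externalizable

    public CompactDate(Date date) { this.date = date; }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeLong(date.getTime()); // only 8 bytes of payload
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        date = new Date(in.readLong());
    }

    public static void main(String[] args) throws Exception {
        // Round-trip through the custom serialiser
        ByteArrayOutputStream bout = new ByteArrayOutputStream();
        try (ObjectOutputStream oout = new ObjectOutputStream(bout)) {
            oout.writeObject(new CompactDate(new Date(0L)));
        }
        ObjectInputStream oin = new ObjectInputStream(
            new ByteArrayInputStream(bout.toByteArray()));
        CompactDate copy = (CompactDate) oin.readObject();
        System.out.println(copy.date.getTime());
    }
}
```

Note that writing the wrapper via writeObject still adds class descriptor overhead; frameworks such as Infinispan let you register the serialiser separately, which avoids most of it.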
Developers are typically given the standard DataOutput interface to implement their custom serialisers. This is true for Infinispan, JBoss Marshalling, and probably other frameworks too. Use of the DataOutput interface is straightforward: you just have to pick the correct method for the data type that you wish to marshal. For serialising strings, however, there are three methods provided:

writeUTF(String s)
writeBytes(String s)
writeChars(String s)
Let’s find out which one is the most efficient:
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;

String s = "The quick brown fox jumps over the lazy dog";

ByteArrayOutputStream bout = new ByteArrayOutputStream();
ObjectOutput oout = new ObjectOutputStream(bout);
oout.writeUTF(s);
oout.flush();
System.out.println("writeUTF length: " + bout.toByteArray().length);

bout = new ByteArrayOutputStream();
oout = new ObjectOutputStream(bout);
oout.writeBytes(s);
oout.flush();
System.out.println("writeBytes length: " + bout.toByteArray().length);

bout = new ByteArrayOutputStream();
oout = new ObjectOutputStream(bout);
oout.writeChars(s);
oout.flush();
System.out.println("writeChars length: " + bout.toByteArray().length);

bout = new ByteArrayOutputStream();
oout = new ObjectOutputStream(bout);
oout.writeObject(s);
oout.flush();
System.out.println("writeObject length: " + bout.toByteArray().length);
Running the above produces the following:
writeUTF length: 51
writeBytes length: 49
writeChars length: 92
writeObject length: 50
The writeUTF and writeBytes methods output similar byte lengths. Surprisingly, so does writeObject. This could be due to Java treating strings in writeObject in an optimised way. writeChars, however, produces a byte output that is almost twice as long, because it writes each character as two bytes. The conclusion is that for strings you could use any of the available methods, but stay clear of writeChars. Bear in mind, though, that writeBytes discards the high-order byte of each character, so it is only safe for Latin-1 text.
Happy coding! 🙂