Category Archives: Advices

A source for information from the Societies and consultants…

Update: CinemaCon/SMPTE/NAB 2014

CinemaCon has placed their Schedule of Events Online.

2014 Schedule of Events – CinemaCon — Celebrating the Moviegoing Experience

CinemaCon — March 24 – 27 (Monday thru Thursday)

SMPTE/NAB Technology Summit On Cinema — April 5 and 6 (Saturday and Sunday before the NAB Exhibits open)

NAB Show — Exhibits April 7 – 10; Monday thru Thursday

Yes, this is a reversal from previous years, though the time spans are much the same. If one were to go to CinemaCon, leaving Thursday, there would be 9 days before getting morning cakes and coffee and sitting through two days of excellent SMPTE presentations.

There was a time when this event was the place for everyone to get caught up on the current methods the digital cinema industry was developing and using to go from 100 to 1,000 to 10,000 installed systems. In a lot of regards it was catch-as-catch-can. But for the last few years it has been more pointedly scientific: reports on how to get to the next level, the level beyond the idiosyncratic 24-frame movie. It is hard to predict from the agenda how it could be better than the seminars of the last two years.

Meanwhile, CinemaCon gets to celebrate another year of record income and what looks like another good year following.

Security Updates; Warning Warning Warning Office and XP Users

But some of the Firefox Add-Ons are pretty cool. What appeals to one may not appeal to others, but I am certain there is something for everyone.

 

The info about CinemaCon is about to start coming fast and furious. As always, it will be at Caesars Palace in Las Vegas… this year on March 24-27, with the trade show from the 25th. The program is otherwise not set, so the best link is: Sign up for Updates

This year the dates of NAB are April 5 – 10, with exhibits starting on the 7th. One presumes that the SMPTE/NAB Cinema event – often the coolest event of the year – will be on the weekend of the 5th and 6th.

Your DCinemaTools.com staff is busy putting the finishing touches on the Digital eXperience Guardian from Digital Test Tools, LLC. This is more than distracting. We are also spending a lot of time on SMPTE study groups and a committee or two. This is highly provocative and recommended for all.

By the way, MAKE CERTAIN TO CHECK ADOBE PREFERENCES when you update. They have the audacity to change them: if you prohibit Adobe from storing data on your system, an update can flip that setting back to Allow.

Blending the Basics of Audio and Light (Pictures): Part One

It would be truly special if there was a knob-based solution for these issues, but a knob doesn’t even work for the first problem of increased volume (pardon the pun). To explain the reasons why goes into the gory details of reminding the reader that the difference between 80 watts of amplifier power and 100 watts of amplifier power is – at best – only 1dB of power at the speaker.
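The 80-versus-100 watt figure is easy to verify: power gain in decibels is ten times the base-10 log of the power ratio. A minimal sketch:

```python
import math

def power_gain_db(p_out_watts: float, p_in_watts: float) -> float:
    """Gain in decibels between two electrical power levels."""
    return 10 * math.log10(p_out_watts / p_in_watts)

# Upgrading an 80 W amplifier to 100 W:
print(round(power_gain_db(100, 80), 2))  # → 0.97
```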

“At best” includes all the mitigating factors of distance and direction and absorption/reflection/phase that waves get tampered by. The effects of cable and connectors, the type of box that the speakers are in and the speakers themselves will determine whether adding more power at the front of the process will add significantly at the far end of the process (the audience’s ears), not only with quantity, but without making the quality worse.

And let’s not forget that an audience in thick winter clothing will hear something different than the mixer who sat behind a flat, reflective mixing console in comfortable clothes. Nor should we forget that the human hearing system doesn’t perceive all frequencies the same: for example, an increase of 2dB at 1kHz will require x power, but the same increase in perceived volume at 100Hz will require several times x power, and the lower the frequency, the more power needed. See: Equal_Loudness_Contours and Phon.
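To put a rough number on that “x power”: at any frequency, raising the level by a fixed number of dB multiplies the required electrical power by 10^(dB/10); the equal-loudness contours then stack an extra penalty on top of that at low frequencies. A quick sketch:

```python
def power_ratio_for_db(delta_db: float) -> float:
    """Electrical power multiplier needed for a given level increase in dB."""
    return 10 ** (delta_db / 10)

# A 2 dB increase costs ~1.58x the power before any equal-loudness
# penalty; at 100 Hz the same *perceived* rise demands still more.
print(round(power_ratio_for_db(2), 2))  # → 1.58
```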

The higher frequencies have their own problems getting through the perforations of the movie screen, but they also have problems with the sound waves that don’t make it through and which bounce back into the speaker that is busy producing the next waves. Increased level just increases phasing problems, and that ain’t the half of it. This and other facts are detailed in two papers in the December 2012 SMPTE Motion Imaging Magazine: Further Investigations Into the Interactions Between Cinema Loudspeakers and Screens and Can We Make Movie Theaters Sound Better and More Consistent? [Membership link]

So, there. Audio. From being able to hear a mosquito buzz at 3 meters to withstanding the roar of a jet engine (for no more than a slice of a second, please…), that’s 15 orders of magnitude of sound, whether you measure it as intensity in W/m²: 10⁻¹² to 10³; as dB of sound intensity level (a logarithmic scale): 0 to 150 dB; or as pressure in Pa: 0.00002 to who knows… maybe 600 rms. Whatever unit is used, what they are measuring is called Dynamic Range.
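Those pressure end-points can be checked against the standard sound pressure level formula, 20·log₁₀(p/p₀), with the usual reference p₀ = 20 µPa:

```python
import math

P_REF = 20e-6  # reference pressure in Pa (0.00002 Pa, threshold of hearing)

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in dB re 20 µPa."""
    return 20 * math.log10(pressure_pa / P_REF)

print(round(spl_db(20e-6)))  # → 0    (threshold of hearing)
print(round(spl_db(632)))    # → 150  (jet-engine territory)
```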

And just in case you were wondering, it isn’t much different for light. The amount of light that the human visual system can handle is defined by the sun (which, like the jet engine, is not to be endured for more than an extremely small slice of a second), which measures in at 1.6×10⁹ cd/m² at noon, while the dimmest light that the eye can see, the threshold of vision, is about 0.000003 cd/m². Give or take an order of magnitude, sound and vision have much the same dynamic range. For both the ears and the eyes there are also a number of ranging mechanisms that protect the system while allowing this dynamic range – some physical, some chemical, some electronic.
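Plugging the two luminance figures from the paragraph above into a base-10 log confirms the “give or take an order of magnitude” claim:

```python
import math

SUN_LUMINANCE = 1.6e9     # cd/m^2, noonday sun (figure from the text)
VISION_THRESHOLD = 3e-6   # cd/m^2, dimmest perceptible luminance

orders = math.log10(SUN_LUMINANCE / VISION_THRESHOLD)
print(round(orders, 1))  # → 14.7 orders of magnitude, vs. ~15 for sound
```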

Both sound and vision deal with waves, each within a very narrow band of frequencies. It is these that we deal with when we design systems to play movies and sound in an auditorium. Fortunately the two bands are spaced so far apart that they don’t interact with each other, but that is about the only break a scientist gets when trying to make the stimulus played back match the stimulus recorded. These waves interact and react differently over small differences in distance.

Which brings us to the end of Part One of this study. Further on we will look at what immersive audio systems and better screens and laser light engines are trying to solve – and it isn’t some conspiracy to put butts in seats and money in someone’s pocket. There are real problems with real compromises at every decision point. We’ll discuss speaker excursions and laser frequencies on the power curve in the next article.

How I Explained REST to My Wife

Wife: Who is Roy Fielding?

Ryan: Some guy. He’s smart.

Wife: Oh? What did he do?

Ryan: He helped write the first web servers and then did a ton of research explaining why the web works the way it does. His name is on the specification for the protocol that is used to get pages from servers to your browser.

Wife: How does it work?

Ryan: The web?

Wife: Yeah.

Ryan: Hmm. Well, it’s all pretty amazing really. And the funny thing is that it’s all very undervalued. The protocol I was talking about, HTTP, it’s capable of all sorts of neat stuff that people ignore for some reason.

Wife: You mean http like the beginning of what I type into the browser?

Ryan: Yeah. That first part tells the browser what protocol to use. That stuff you type in there is one of the most important breakthroughs in the history of computing.

Wife: Why?

Ryan: Because it is capable of describing the location of something anywhere in the world from anywhere in the world. It’s the foundation of the web. You can think of it like GPS coordinates for knowledge and information.

Wife: For web pages?

Ryan: For anything really. That guy, Roy Fielding, he talks a lot about what those things point to in that research I was talking about. The web is built on an architectural style called REST. REST provides a definition of a resource, which is what those things point to.

Wife: A web page is a resource?

Ryan: Kind of. A web page is a representation of a resource. Resources are just concepts. URLs–those things that you type into the browser…

Wife: I know what a URL is..

Ryan: Oh, right. Those tell the browser that there’s a concept somewhere. A browser can then go ask for a specific representation of the concept. Specifically, the browser asks for the web page representation of the concept.

Wife: What other kinds of representations are there?

Ryan: Actually, representations is one of these things that doesn’t get used a lot. In most cases, a resource has only a single representation. But we’re hoping that representations will be used more in the future because there’s a bunch of new formats popping up all over the place.

Wife: Like what?

Ryan: Hmm. Well, there’s this concept that people are calling Web Services. It means a lot of different things to a lot of different people but the basic concept is that machines could use the web just like people do.

Wife: Is this another robot thing?

Ryan: No, not really. I don’t mean that machines will be sitting down at the desk and browsing the web. But computers can use those same protocols to send messages back and forth to each other. We’ve been doing that for a long time but none of the techniques we use today work well when you need to be able to talk to all of the machines in the entire world.

Wife: Why not?

Ryan: Because they weren’t designed to be used like that. When Fielding and his buddies started building the web, being able to talk to any machine anywhere in the world was a primary concern. Most of the techniques we use at work to get computers to talk to each other didn’t have those requirements. You just needed to talk to a small group of machines.

Wife: And now you need to talk to all the machines?

Ryan: Yes – and more. We need to be able to talk to all machines about all the stuff that’s on all the other machines. So we need some way of having one machine tell another machine about a resource that might be on yet another machine.

Wife: What?

Ryan: Let’s say you’re talking to your sister and she wants to borrow the sweeper or something. But you don’t have it – your Mom has it. So you tell your sister to get it from your Mom instead. This happens all the time in real life and it happens all the time when machines start talking too.

Wife: So how do the machines tell each other where things are?

Ryan: The URL, of course. If everything that machines need to talk about has a corresponding URL, you’ve created the machine equivalent of a noun. That you and I and the rest of the world have agreed on talking about nouns in a certain way is pretty important, eh?

Wife: Yeah.

Ryan: Machines don’t have a universal noun – that’s why they suck. Every programming language, database, or other kind of system has a different way of talking about nouns. That’s why the URL is so important. It lets all of these systems tell each other about each other’s nouns.

Wife: But when I’m looking at a web page, I don’t think of it like that.

Ryan: Nobody does. Except Fielding and a handful of other people. That’s why machines still suck.

Wife: What about verbs and pronouns and adjectives?

Ryan: Funny you asked because that’s another big aspect of REST. Well, verbs are anyway.

Wife: I was just joking.

Ryan: It was a funny joke but it’s actually not a joke at all. Verbs are important. There’s a powerful concept in programming and CS theory called polymorphism. That’s a geeky way of saying that different nouns can have the same verb applied to them.

Wife: I don’t get it.

Ryan: Well.. Look at the coffee table. What are the nouns? Cup, tray, newspaper, remote. Now, what are some things you can do to all of these things?

Wife: I don’t get it…

Ryan: You can get them, right? You can pick them up. You can knock them over. You can burn them. You can apply those same exact verbs to any of the objects sitting there.

Wife: Okay… so?

Ryan: Well, that’s important. What if instead of me being able to say to you, “get the cup,” and “get the newspaper,” and “get the remote”; what if instead we needed to come up with different verbs for each of the nouns? I couldn’t use the word “get” universally, but instead had to think up a new word for each verb/noun combination.

Wife: Wow! That’s weird.

Ryan: Yes, it is. Our brains are somehow smart enough to know that the same verbs can be applied to many different nouns. Some verbs are more specific than others and apply only to a small set of nouns. For instance, I can’t drive a cup and I can’t drink a car. But some verbs are almost universal like GET, PUT, and DELETE.
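(An aside for the programmers reading along: the polymorphism Ryan describes is easy to sketch in Python, where one verb, len(), applies to many unrelated nouns, much as GET applies to any resource.)

```python
# One verb applied to many unrelated nouns: len() works on anything that
# knows how to report its size, regardless of the object's type.
nouns = ["cup", (1, 2, 3), {"tray": True}, b"remote"]
print([len(n) for n in nouns])  # → [3, 3, 1, 6]
```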

Wife: You can’t DELETE a cup.

Ryan: Well, okay, but you can throw it away. That was another joke, right?

Wife: Yeah.

Ryan: So anyway, HTTP–this protocol Fielding and his friends created–is all about applying verbs to nouns. For instance, when you go to a web page, the browser does an HTTP GET on the URL you type in and back comes a web page.

Web pages usually have images, right? Those are separate resources. The web page just specifies the URLs to the images and the browser goes and does more HTTP GETs on them until all the resources are obtained and the web page is displayed. But the important thing here is that very different kinds of nouns can be treated the same. Whether the noun is an image, text, video, an mp3, a slideshow, whatever. I can GET all of those things the same way given a URL.

Wife: Sounds like GET is a pretty important verb.

Ryan: It is. Especially when you’re using a web browser because browsers pretty much just GET stuff. They don’t do a lot of other types of interaction with resources. This is a problem because it has led many people to assume that HTTP is just for GETing. But HTTP is actually a general-purpose protocol for applying verbs to nouns.

Wife: Cool. But I still don’t see how this changes anything. What kinds of nouns and verbs do you want?

Ryan: Well the nouns are there but not in the right format.

Think about when you’re browsing around amazon.com looking for things to buy me for Christmas. Imagine each of the products as being nouns. Now, if they were available in a representation that a machine could understand, you could do a lot of neat things.

Wife: Why can’t a machine understand a normal web page?

Ryan: Because web pages are designed to be understood by people. A machine doesn’t care about layout and styling. Machines basically just need the data. Ideally, every URL would have a human readable and a machine readable representation. When a machine GETs the resource, it will ask for the machine readable one. When a browser GETs a resource for a human, it will ask for the human readable one.
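A sketch of that idea, with a made-up URL and no actual network traffic — the same noun, asked for in two representations via the HTTP Accept header:

```python
# The path and host are hypothetical; we only build the raw request text
# a client would send for each representation. Nothing is transmitted.
def build_request(path: str, accept: str) -> str:
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: example.com\r\n"
            f"Accept: {accept}\r\n\r\n")

machine_readable = build_request("/products/42", "application/json")
human_readable = build_request("/products/42", "text/html")
print(machine_readable.splitlines()[2])  # → Accept: application/json
```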

Wife: So people would have to make machine formats for all their pages?

Ryan: If it were valuable.

Look, we’ve been talking about this with a lot of abstraction. How about we take a real example. You’re a teacher – at school I bet you have a big computer system, or three or four computer systems more likely, that let you manage students: what classes they’re in, what grades they’re getting, emergency contacts, information about the books you teach out of, etc. If the systems are web-based, then there’s probably a URL for each of the nouns involved here: student, teacher, class, book, room, etc. Right now, getting the URL through the browser gives you a web page. If there were a machine readable representation for each URL, then it would be trivial to latch new tools onto the system because all of that information would be consumable in a standard way. It would also make it quite a bit easier for each of the systems to talk to each other. Or, you could build a state or country-wide system that was able to talk to each of the individual school systems to collect testing scores. The possibilities are endless.

Each of the systems would get information from each other using a simple HTTP GET. If one system needs to add something to another system, it would use an HTTP POST. If a system wants to update something in another system, it uses an HTTP PUT. The only thing left to figure out is what the data should look like.
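The verb-to-intent mapping in that paragraph can be sketched as plain data; the resource paths here are hypothetical:

```python
# Resource paths are made up for illustration; the mapping of intent to
# HTTP verb (GET reads, POST adds, PUT updates) follows the text above.
ACTIONS = {
    "read":   ("GET",  "/students/7"),
    "create": ("POST", "/students"),
    "update": ("PUT",  "/students/7"),
}
for intent, (verb, path) in ACTIONS.items():
    print(f"{intent:7s} -> {verb} {path} HTTP/1.1")
```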

Wife: So this is what you and all the computer people are working on now? Deciding what the data should look like?

Ryan: Sadly, no. Instead, the large majority are busy writing layers of complex specifications for doing this stuff in a different way that isn’t nearly as useful or eloquent. Nouns aren’t universal and verbs aren’t polymorphic. We’re throwing out decades of real field usage and proven technique and starting over with something that looks a lot like other systems that have failed in the past. We’re using HTTP but only because it helps us talk to our network and security people less. We’re trading simplicity for flashy tools and wizards.

Wife: Why?

Ryan: I have no idea.

Wife: Why don’t you say something?

Ryan: Maybe I will.

How I Explained REST to My Wife

Ryan Tomayko

Sunday, December 12, 2004

Translations of this dialog are available in Japanese, French, Vietnamese, Italian, Spanish, Portuguese, and Chinese. Huge thanks to YAMAMOTO Yohei, Karl Dubost, jishin, Barbz, Tordek, Edgard Arakaki, and keven lw, respectively. If you know of additional translations, please leave a comment with the location.

How I Explained REST to My Wife

Wife: Who is Roy Fielding?

Ryan: Some guy. He’s smart.

Wife: Oh? What did he do?

Ryan: He helped write the first web servers and then did a ton of research explaining why the web works the way it does. His name is on the specification for the protocol that is used to get pages from servers to your browser.

Wife: How does it work?

Ryan: The web?

Wife: Yeah.

Ryan: Hmm. Well, it’s all pretty amazing really. And the funny thing is that it’s all very undervalued. The protocol I was talking about, HTTP, it’s capable of all sorts of neat stuff that people ignore for some reason.

Wife: You mean http like the beginning of what I type into the browser?

Ryan: Yeah. That first part tells the browser what protocol to use. That stuff you type in there is one of the most important breakthroughs in the history of computing.

Wife: Why?

Ryan: Because it is capable of describing the location of something anywhere in the worldfrom anywhere in the world. It’s the foundation of the web. You can think of it like GPS coordinates for knowledge and information.

Wife: For web pages?

Ryan: For anything really. That guy, Roy Fielding, he talks a lot about what those things point to in that research I was talking about. The web is built on an architectural style called REST. REST provides a definition of a resource, which is what those things point to.

Wife: A web page is a resource?

Ryan: Kind of. A web page is a representation of a resource. Resources are just concepts. URLs–those things that you type into the browser…

Wife: I know what a URL is..

Ryan: Oh, right. Those tell the browser that there’s a concept somewhere. A browser can then go ask for a specific representation of the concept. Specifically, the browser asks for the web page representation of the concept.

Wife: What other kinds of representations are there?

Ryan: Actually, representations is one of these things that doesn’t get used a lot. In most cases, a resource has only a single representation. But we’re hoping that representations will be used more in the future because there’s a bunch of new formats popping up all over the place.

Wife: Like what?

Ryan: Hmm. Well, there’s this concept that people are calling Web Services. It means a lot of different things to a lot of different people but the basic concept is that machines could use the web just like people do.

Wife: Is this another robot thing?

Ryan: No, not really. I don’t mean that machines will be sitting down at the desk and browsing the web. But computers can use those same protocols to send messages back and forth to each other. We’ve been doing that for a long time but none of the techniques we use today work well when you need to be able to talk to all of the machines in the entire world.

Wife: Why not?

Ryan: Because they weren’t designed to be used like that. When Fielding and his buddies started building the web, being able to talk to any machine anywhere in the world was a primary concern. Most of the techniques we use at work to get computers to talk to each other didn’t have those requirements. You just needed to talk to a small group of machines.

Wife: And now you need to talk to all the machines?

Ryan: Yes – and more. We need to be able to talk to all machines about all the stuff that’s on all the other machines. So we need some way of having one machine tell another machine about a resource that might be on yet another machine.

Wife: What?

Ryan: Let’s say you’re talking to your sister and she wants to borrow the sweeper or something. But you don’t have it – your Mom has it. So you tell your sister to get it from your Mom instead. This happens all the time in real life and it happens all the time when machines start talking too.

Wife: So how do the machines tell each other where things are?

Ryan: The URL, of course. If everything that machines need to talk about has a corresponding URL, you’ve created the machine equivalent of a noun. That you and I and the rest of the world have agreed on talking about nouns in a certain way is pretty important, eh?

Wife: Yeah.

Ryan: Machines don’t have a universal noun – that’s why they suck. Every programming language, database, or other kind of system has a different way of talking about nouns. That’s why the URL is so important. It let’s all of these systems tell each other about each other’s nouns.

Wife: But when I’m looking at a web page, I don’t think of it like that.

Ryan: Nobody does. Except Fielding and handful of other people. That’s why machines still suck.

Wife: What about verbs and pronouns and adjectives?

Ryan: Funny you asked because that’s another big aspect of REST. Well, verbs are anyway.

Wife: I was just joking.

Ryan: It was a funny joke but it’s actually not a joke at all. Verbs are important. There’s a powerful concept in programming and CS theory called polymorphism. That’s a geeky way of saying that different nouns can have the same verb applied to them.

Wife: I don’t get it.

Ryan: Well.. Look at the coffee table. What are the nouns? Cup, tray, newspaper, remote. Now, what are some things you can do to all of these things?

Wife: I don’t get it…

Ryan: You can get them, right? You can pick them up. You can knock them over. You can burn them. You can apply those same exact verbs to any of the objects sitting there.

Wife: Okay… so?

Ryan: Well, that’s important. What if instead of me being able to say to you, “get the cup,” and “get the newspaper,” and “get the remote”; what if instead we needed to come up with different verbs for each of the nouns? I couldn’t use the word “get” universally, but instead had to think up a new word for each verb/noun combination.

Wife: Wow! That’s weird.

Ryan: Yes, it is. Our brains are somehow smart enough to know that the same verbs can be applied to many different nouns. Some verbs are more specific than others and apply only to a small set of nouns. For instance, I can’t drive a cup and I can’t drink a car. But some verbs are almost universal like GET, PUT, and DELETE.

Wife: You can’t DELETE a cup.

Ryan: Well, okay, but you can throw it away. That was another joke, right?

Wife: Yeah.

Ryan: So anyway, HTTP–this protocol Fielding and his friends created–is all about applying verbs to nouns. For instance, when you go to a web page, the browser does an HTTP GET on the URL you type in and back comes a web page.

Web pages usually have images, right? Those are separate resources. The web page just specifies the URLs to the images and the browser goes and does more HTTP GETs on them until all the resources are obtained and the web page is displayed. But the important thing here is that very different kinds of nouns can be treated the same. Whether the noun is an image, text, video, an mp3, a slideshow, whatever. I can GET all of those things the same way given a URL.

Wife: Sounds like GET is a pretty important verb.

Ryan: It is. Especially when you’re using a web browser because browsers pretty much justGET stuff. They don’t do a lot of other types of interaction with resources. This is a problem because it has led many people to assume that HTTP is just for GETing. But HTTP is actually ageneral purpose protocol for applying verbs to nouns.

Wife: Cool. But I still don’t see how this changes anything. What kinds of nouns and verbs do you want?

Ryan: Well the nouns are there but not in the right format.

Think about when you’re browsing around amazon.com looking for things to buy me for Christmas. Imagine each of the products as being nouns. Now, if they were available in a representation that a machine could understand, you could do a lot of neat things.

Wife: Why can’t a machine understand a normal web page?

Ryan: Because web pages are designed to be understood by people. A machine doesn’t care about layout and styling. Machines basically just need the data. Ideally, every URL would have a human readable and a machine readable representation. When a machine GETs the resource, it will ask for the machine readable one. When a browser GETs a resource for a human, it will ask for the human readable one.

Wife: So people would have to make machine formats for all their pages?

Ryan: If it were valuable.

Look, we’ve been talking about this with a lot of abstraction. How about we take a real example. You’re a teacher – at school I bet you have a big computer system, or three or four computer systems more likely, that let you manage students: what classes they’re in, what grades they’re getting, emergency contacts, information about the books you teach out of, etc. If the systems are web-based, then there’s probably a URL for each of the nouns involved here: student, teacher, class, book, room, etc. Right now, getting the URL through the browser gives you a web page. If there were a machine readable representation for each URL, then it would be trivial to latch new tools onto the system because all of that information would be consumable in a standard way. It would also make it quite a bit easier for each of the systems to talk to each other. Or, you could build a state or country-wide system that was able to talk to each of the individual school systems to collect testing scores. The possibilities are endless.

Each of the systems would get information from each other using a simple HTTP GET. If one system needs to add something to another system, it would use an HTTP POST. If a system wants to update something in another system, it uses an HTTP PUT. The only thing left to figure out is what the data should look like.
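The division of labor among the verbs can be sketched in a few lines. This is a toy in-memory model, not a real school system's API; the URLs and fields are hypothetical:

```python
# Uniform-interface sketch: a dictionary of student resources keyed by
# URL, with three functions mirroring the HTTP verbs GET, POST, and PUT.
# All URLs and record fields are invented for illustration.

students = {"/students/1": {"name": "Alice", "grade": "A"}}

def http_get(url):
    """Read a resource without changing anything (safe, repeatable)."""
    return students.get(url)

def http_post(collection_url, data):
    """Add a new resource to a collection; the server picks its URL."""
    new_url = f"{collection_url}/{len(students) + 1}"
    students[new_url] = data
    return new_url

def http_put(url, data):
    """Create or fully replace the resource at a URL the client knows."""
    students[url] = data

new_student = http_post("/students", {"name": "Bob", "grade": "B"})
http_put(new_student, {"name": "Bob", "grade": "A"})
print(http_get(new_student))
```

The point of the exercise is that every noun in the system gets the same four or five verbs, so a new tool only has to learn the data format, never a new protocol.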

Wife: So this is what you and all the computer people are working on now? Deciding what the data should look like?

Ryan: Sadly, no. Instead, the large majority are busy writing layers of complex specifications for doing this stuff in a different way that isn’t nearly as useful or eloquent. Nouns aren’t universal and verbs aren’t polymorphic. We’re throwing out decades of real field usage and proven technique and starting over with something that looks a lot like other systems that have failed in the past. We’re using HTTP but only because it helps us talk to our network and security people less. We’re trading simplicity for flashy tools and wizards.

Wife: Why?

Ryan: I have no idea.

Wife: Why don’t you say something?

Ryan: Maybe I will.

How I Explained REST to My Wife

Ryan Tomayko

Sunday, December 12, 2004

Translations of the following dialog available in Japanese, French, Vietnamese, Italian, Spanish, Portuguese, and Chinese. Huge thanks to YAMAMOTO Yohei, Karl Dubost, jishin, Barbz, Tordek, Edgard Arakaki, and keven lw, respectively. If you know of additional translations, please leave a comment with the location.

Buzzword Compliance at SMPTE/NAB/CinemaCon

The fundamentals of Digital Cinema are built upon Open Source tools, in particular JPEG 2000 (instead of the license-troubled MPEG world) and AES-128 encryption (instead of any number of proprietary systems), as well as PCM Wave coding for audio. The combined reasoning of avoiding license fees and letting the technology flow, free of the restrictions that proprietary tools bring, makes sense.

Now, an adjunct technology is being held to the same scrutiny, and one suspects that the reason is Marketing. Clever marketing, since this is a confused market, but marketing nonetheless. One of the first things one learns about standards is that they can be inhibiting and destructive in many circumstances.

The exhibitors want two things. They want to differentiate themselves by continuing to offer perks and higher quality in special circumstances. This means that they will buy innovation.

But they also want some assurance that the equipment they buy won’t turn out to be something they can’t use in a few years. To many, the latter translates into “Come On Guys, Can’t You Work Together?” Hey! Open Source.

Whether Open Source is something the industry wants in its secondary products needs some scrutiny and education. There also has to be some recognition of the enormous investment that goes into hardware designs and into accommodating capabilities not yet dreamed of.

What is being heard now is Open Something: Open Source is bandied about, then licensing is tied to usage and it becomes something else.

This will be updated as the players find ways to answer to their stockholders…or find another way to announce their firstiness.
