
Multifilar antennas target improved autonomous performance

By Oliver Leisten
Technical Director, Helix Technologies Ltd.

To attain the 10-centimeter accuracy required for autonomous vehicle positioning within urban multipath propagation conditions, a significant upgrade in GNSS antenna performance is needed. The autonomous vehicle application demands excellent antenna performance, together with exploitation of the full set of GNSS multi-frequency and multi-constellation system advances, to deliver this performance in the most severe of real-world use scenarios.

Given that an antenna necessarily operates in open fields, it follows that field resonance must be managed to provide predictable performance in diverse use scenarios. A new antenna developed by Helix Technologies (Figure 1) deploys balanced fields across a cylindrical ceramic dielectric core to constrain the outreach of resonance fields and thereby minimize interaction with nearby objects. The antenna feed is designed to enforce balanced operation, which ensures that the antenna resonates predictably and independently of the platform (i.e., the vehicle in the case of autonomous driving). Thus, the operation is not significantly influenced by the mechanical or material properties of the platform or housing. This architecture provides isolation from common-mode signals and protects the GNSS signals from conducted interference.

Figure 1. Features of the hexafilar-turnstile solution for multi-frequency GNSS.

It is challenging to configure a GNSS antenna operating at many frequencies in which the performance at any one frequency is not impaired by mode interactions. Such impairments can have serious consequences for the position accuracy in an urban environment because they adversely affect the cross-polar discrimination: a parameter which is most important for eliminating multipath positioning errors. The architecture of the hexafilar-turnstile antenna has overcome this problem and delivers the circular polarization pattern characteristics illustrated (simulated data) in Figure 2.

Figure 2. Simulated RH circular polarized patterns at GPS L1 (left) and GPS L2 (right).

The figure demonstrates that the antenna forms cardioid patterns at two frequencies. The 3D graphic shows the omni-directionality, and the 2D elevation cuts exhibit the signature cardioid shape that characterizes a “spinning-dipole” circularly polarized antenna.

It is often suggested that patterns of wide beam-width such as these would not be particularly suitable for positioning in urban canyons where the sky can only be seen in a relatively small solid angle. In fact, the ratio of front-to-back gain is strongly associated with the cross-polar discrimination that is important for position accuracy in urban environments. Patterns of this quality can deliver as much as 30-dB of signal-to-interference advantage in favor of the direct-path satellite signals against signals whose polarization has reversed due to multipath reflection.
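The arithmetic behind that claim is simple: in decibels, the advantage of the direct signal over a polarization-reversed reflection is just the difference between the co-polar gain toward the satellite and the cross-polar gain toward the reflection. The sketch below illustrates this; the function name and all gain values are illustrative assumptions, not figures from the article.

```python
# Hypothetical illustration: how antenna gains translate into a
# signal-to-interference ratio (SIR) against a polarization-reversed
# multipath reflection. All gain values are assumed for illustration.

def multipath_sir_db(g_rhcp_direct_db, g_lhcp_reflected_db, reflection_loss_db=0.0):
    """SIR (dB) of the direct RHCP signal over a reflected, LHCP-reversed echo.

    g_rhcp_direct_db    : co-polar (RHCP) gain toward the satellite, dBi
    g_lhcp_reflected_db : cross-polar (LHCP) gain toward the reflection, dBi
    reflection_loss_db  : any additional loss incurred by the reflection itself
    """
    return (g_rhcp_direct_db - g_lhcp_reflected_db) + reflection_loss_db

# Example: +3 dBi co-polar gain toward the satellite versus -27 dBi
# cross-polar response toward the reflection yields the ~30 dB advantage cited.
sir = multipath_sir_db(3.0, -27.0)
print(sir)  # 30.0
```

Any reflection loss at the scattering surface only adds to this margin, which is why good cross-polar discrimination pays off disproportionately in urban canyons.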

Helix Technologies is developing antennas which have two-pole frequency responses that provide two frequencies of optimum cross-polar discrimination that are aligned to the two frequencies of maximum spectral density of an M-BOC or Alt-BOC coded signal, as transmitted by the modern GPS and Galileo satellites respectively. These antennas should be available for test and evaluation in Q2 of 2018.


http://www.GPSWORLD.com
Friday, February 9, 2018

Ask an artificially intelligent question…

There was plenty for a philosophy major to sink his teeth into at ION’s January workshop on Cognizant Autonomous Systems for Safety Critical Applications (CASSCA).

What is knowledge? What is meaning? What is understanding? What is intelligence? What is learning? What is thinking?

These questions excited Plato and Kant, Buddha and Descartes, perhaps out of intellectual or spiritual curiosity. Who’s to say? But the people asking them now are driven, quite literally, by practicalities. They have come to realize that we cannot ride in driverless cars or fly in pilotless plane-taxis, we cannot live in an autonomous, artificially intelligent environment without knowing a bit more exactly what knowledge is, in this brave new world.

Without thinking about what thinking may be, for a machine.

Why does this matter to a GPS/GNSS/PNT readership? Because as positioning and navigation engage more deeply with artificial intelligence (AI) generally, and with autonomy in particular, these issues emerge as part of the environment that such solutions explore, and in which they must verify and validate themselves.

Welcome to the future, it’s yours. Now think about it.

Culture Club. Some of us may have believed that only technical obstacles remain in the path of a driverless car and an otherwise automated society, salted with a few regulatory wrinkles to iron out. But as build-a-robot R&D projects transform into full commercial partnerships, cultural challenges jump up as well: inertia, instability of requirements, unanticipated expectations, magical thinking (the development of empathetic attitudes towards robots), misplaced trust and misplaced distrust. All this according to Signe Redfield, roboticist and mission manager at the U.S. Naval Research Laboratory.

Joao Hespanha, professor of electrical and computer engineering at the University of California, Santa Barbara, outlined three key concepts for AI development: computation, perception and security. The critical questions for the first named are, how much computing will be done onboard the platform, how much learning will be done onboard, and how much of each process will be distributed to offboard computation. Perception, a crux for autonomy, is closely bound in a feedback loop with control. The platform must gather data to make autonomous decisions (control), and those decisions must maximize the gathering of information (perception).

Security deserves ample consideration. All safety-critical systems must provide for — and prevent where possible — decisions based on compromised measurements, which may stem from system or environmental noise, sensor faults, hacked sensors, or other corruptions.

Second Wave. We are in the second wave of AI, according to Steven Rogers, senior scientist for sensor fusion at the Air Force Research Laboratory. In the first wave, in the 1960s and ’70s, large and complex algorithms, relatively low on data, drove new developments — but they hit real-world problems, hard. Since the mid-80s, we have been in the “classify” stage, with relatively simpler programs generating and consuming lots of data. Intense statistical learning will eventually lead to the third wave of AI: Explain.

On a timeline yet to be determined, contextual adaptation will give rise to “explainable” AI, capable of answering unexpected queries. That is, it will have learned how to teach itself.

Some of this stuff gets pretty scary.

Most future knowledge will be machine-generated.

Let’s run through that one more time.

“Most future knowledge on Earth will come from machines extracting it from the environment,” said Rogers. “Machine generation of knowledge is key for autonomy.”

Here’s where the thought processes really started to levitate. “Current sense-making solutions are not keeping pace, not growing as knowledge is growing,” Rogers asserted. And he challenged us with the questions posed at the beginning of this column: in AI, the context we will use to explore much of the future, what is knowledge? What is meaning? And so on.

He gave us one of his answers: “Knowledge is what is used to generate the meaning of the observable for an autonomous system. Correspondingly, machine-generated knowledge is what is used to turn observables into machine-generated meaning.”


Slide from Steven “Cap” Rogers’ presentation at CASSCA.

He suggested a book by George Lakoff and Mark Johnson, Metaphors We Live By. Pretty heady stuff for a room full of engineers. I don’t know about you. I’m headed down to the library to check it out.

Requirements, Simple/Not. We got back to earth with some technical challenges we could actually chew on with David Corman, program manager for Cyber-Physical Systems and Smart and Connected Communities at the National Science Foundation. Seemingly simple requirements for safety-critical applications break down into hundreds of requirements that no one has really thought about, Corman said, as he displayed a chart of “Some Example Research Problems.”

Precision agriculture and environmental monitoring are two sectors where he thought autonomous operations come closest to full realization, because their operational environments are structurally defined enough. In such constrained niches that we more fully understand, we can implement autonomous operations. Elsewhere, “we don’t know how to specify what we want, so that we get only ‘good results’ and no ‘bad results.’ ”

He identified a looming Cambrian explosion in AI, analogous to that for plants and animals following the dinosaur extinction, in which systems interact, gather data, sense the environment, learn, improve and multiply. He suggested we browse “The Seven Deadly Sins of Predicting the Future of AI,” an essay by Rodney Brooks.

The afternoon’s workshop talks followed, from experts in autonomous flight software, legal and insurance aspects of autonomy, the Ohio State University’s Center for Automotive Research, and the U.S. Department of Transportation. But I tell you, this morning did my brain in.

Before folding up, I must mention a short video on autonomous flying taxis displayed by Paul DeBitetto, VP of software engineering at Top Flight Technologies. It depicts Pop.Up, a modular ground and air passenger vehicle for megacities of the future. Check it out.

The CASSCA workshop was organized and moderated by Zak Kassas, an assistant professor at the University of California, Riverside and director of the Autonomous Systems Perception, Intelligence & Navigation (ASPIN) Laboratory. He is also co-author of two cover stories in GPS World, “LTE cellular steers UAV” and “Opportunity for Accuracy.”

ION president John Raquet expressed the hope that we may see a fully fledged conference on this topic in the near future: CASSCA 2019, perhaps, to join the rotating repertory of ION annual meetings.

Agreed. We need to think more.

Don’t look back, the machines may be gaining on us.


http://www.GIM-INTERNACIONAL.com
Monday, February 5, 2018

Dutch UAV First to Map Remote Tropical Island in 50 Years


The interior region of Silhouette is a national park with some of the richest biodiversity in the entire Indian Ocean. It is also home to several critically endangered plants and animals. To keep track of the different species on the island and gain insights into the unique ecosystem, an up-to-date map is an essential tool. However, producing one had not been possible for half a century, owing to the island’s rugged terrain, with mountaintops up to 740 m, and its remote location.

Until now, the outdated and inaccurate map was the only means of planning and guiding expeditions. During trips on the island, one can encounter cliffs or impenetrable areas that are not shown on the current map, forcing teams to abort missions prematurely, explained François Baguette, conservation officer at the Island Conservation Society. The high-resolution photographs and maps made by Marlyn will help them better understand the area and better plan expeditions, saving a lot of time and valuable manpower, he added.

Challenging mapping circumstances

With this project, ATMOS UAV is pursuing the vision of the company: empowering professionals across industries to effortlessly gather geospatial data from the sky, enabling them to make more informed decisions, more efficiently and effectively, mentioned Sander Hulsman, CEO of ATMOS UAV. The ability of the Marlyn UAV to withstand winds up to 6Bft during all phases of the flight from take-off to cruise and landing makes it a unique professional mapping tool that proved to be essential for this project. Hulsman added his company is very proud that TFC International selected their flagship model Marlyn for this beautiful and challenging project.

Having only a handful of small take-off points, combined with the mountains, ever-present wind and heat, makes mapping Silhouette Island a big challenge. Conventional UAVs that fly like helicopters lack the endurance, while fixed-wing drones need a large empty space to take off and land, which is not available. Jean-François Rossignol, director of TFC International, said that when they learned of Marlyn’s ability to take off and land vertically in windy conditions, combined with its long endurance, they knew this was exactly the drone they needed for this wonderful project.


http://www.GIM-INTERNACIONAL.com
Monday, January 5, 2018

3D at Depth Reaches Milestone in Offshore Lidar Metrologies

3D at Depth is the world’s leading expert in subsea Lidar technology and first introduced subsea Lidar metrologies to the offshore market 3.5 years ago in the Gulf of Mexico. The company has now completed its 300th metrology, a significant milestone in the offshore survey market, as customers continue to see the clear advantages of subsea Lidar metrologies over traditional laser scanning and conventional methods.

Joint presentation at Subsea Expo

3D at Depth is excited to have reached this milestone, stated Neil Manning, chief business development officer. The Company has been committed to innovation and collaboration with oil and gas majors from its inception. As a result, this relationship has produced an IMCA-approved process for the utilisation of its technology. 3D at Depth will hold a joint presentation with Shell UK at Subsea Expo on 7 February at the Aberdeen Exhibition and Conference Centre (AECC) between 14:00 and 16:30. The presentation, called ‘The New Metrology’, focuses on a collaboration project at the Gannet C installation.

Data collection and processing

To date, the Company has conducted metrology projects in the US, Europe, the Mediterranean and Asia, and has operated at depths of up to 4,000 meters. Each 3D at Depth subsea Lidar (SL) laser system is powered by proprietary point cloud processing software with in-house patented technology and a customised optical design. The success of 3D at Depth’s metrology process is based on the unique features of subsea Lidar technology, which include touchless, total-beam-control scans, multi-return time-of-flight (TOF) data formats and ranges of up to 45 meters during data collection.
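Time-of-flight ranging underwater follows the same principle as in air, except that light travels more slowly in seawater. The minimal sketch below illustrates the arithmetic; the refractive index used (about 1.34 for green laser light) is an assumption, as the actual value varies with temperature, salinity and pressure, and the function name is hypothetical.

```python
# A minimal sketch of time-of-flight (TOF) ranging as used by subsea lidar.
# N_SEAWATER is an assumed refractive index (~1.34 for 532 nm light); the
# real value depends on temperature, salinity and pressure.

C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_SEAWATER = 1.34          # assumed refractive index of seawater

def tof_range_m(round_trip_s, n=N_SEAWATER):
    """One-way range from a round-trip travel time, in metres."""
    return (C_VACUUM / n) * round_trip_s / 2.0

# The 45 m range quoted above corresponds to roughly 0.4 microseconds:
t = 2 * 45.0 * N_SEAWATER / C_VACUUM
print(tof_range_m(t))  # ~45.0
```

Multi-return systems record several such round-trip times per pulse, which is what allows a single scan to separate, say, a pipeline surface from suspended particles along the beam.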

From a client deliverable standpoint, every 3D at Depth Lidar metrology delivers precise, repeatable, millimetric 3D point cloud data sets that can be easily imported into any existing GIS or CAD-based platform, providing an unparalleled baseline of subsea assets at the point of installation. (Reflectivity and longer-range measurement repeatability are unique features attributed to 3D Lidar data sets and are essential for more precise 3D modelling, analysis and visualisation.)

Applications

3D at Depth’s metrology projects have provided critical information in the areas of cost avoidance, risk mitigation and asset integrity. Subsea Lidar data sets provide a variety of 3D intelligence beyond metrology. These applications include validating as-built design data vs. the actual structure design for asset integrity; generating physical 3D 1:1 printed scale models for wellhead fabrication, and quantifying potential risk hazards in the areas of vibration and subsidence. The Company is developing additional Lidar technology applications in the areas of leak detection, decommissioning, new field development and non-touch subsidence and asset condition monitoring.

The Company made recent news with the development of an immersive, collaborative 3D Lidar VR platform, “Powered by IQ3”, that connects multiple users and key decision makers to their 3D subsea data via any laptop, desktop or smart device through a secure web portal.

3D at Depth will be conducting workshops and demonstrations during Oceanology International 2018 in London at Stand G250. Visitors who participate in the workshop will receive a pair of 3D goggles.


http://www.GIM-INTERNACIONAL.com
Monday, February 5, 2018

Innovation: The continued evolution of the GNSS software-defined radio

Getting better all the time

In this month’s column, we review the history and future of software-defined radios (SDRs), looking in particular at GNSS SDRs.

This online version of the print article includes two bonus sections for which there wasn’t room in the magazine: New Frontiers: GNSS SDRs in Space and The Economics of SDRs.

By James T. Curran, Carles Fernández-Prades, Aiden Morrison and Michele Bavaro

INNOVATION INSIGHTS with Richard Langley

I had a fairly normal childhood—as a nerd. I was interested in radio and so was my sister. For her, it was the local AM radio stations where she could hear the latest Beatles’ hits on her six-transistor handheld portable. But for me, it was shortwave radio. I received a Knight-Kit two-tube regenerative shortwave receiver for Christmas 1963 when I was 14. It used one tube for the RF section and one tube for the audio amplifier. Using a random-length antenna above my mother’s clothesline, I was able to log radio stations from more than 100 countries during my high-school days.

With the pressures of university studies and starting to work for a living, I put my radio hobby on hold. But on an Air Canada flight to a conference early in 1985, I spotted an advertisement in the inflight magazine for the diminutive Sony ICF-7600D portable shortwave receiver — the height of miniaturization of microprocessor-controlled receivers at the time — and I acquired one in Hong Kong in May of that year before starting a lecture tour in the People’s Republic of China. I used the Sony receiver extensively at home and on trips overseas and heard many interesting broadcasts over the years including President Gorbachev’s resignation speech live from Radio Moscow.

Fast forward to 2013, when I purchased my first software-defined radio (SDR) receiver, a FUNcube Dongle Pro+, with frequency coverage from longwave up to the L-band. Interfaced via USB to a computer and bespoke software, an SDR receiver allows one to monitor a wide swath of the radio spectrum or record it for future analysis as in-phase and quadrature components. I have since acquired several other SDR receivers, and the capability of these units keeps getting better and better, delighting me and my fellow radio hobbyists. But these improvements in SDR technology extend to other uses of the radio spectrum including GNSS. In this month’s column, we review the history and future of SDRs looking in particular at GNSS SDRs. And what the Beatles said about improving one’s nature as a human being also aptly describes the performance of SDRs: it’s getting better all the time.


The software-defined radio (SDR) admits countless interpretations, depending on the context for which it is designed and used. As a working definition, we take an SDR to be a reconfigurable radio system whose characteristics are partially or fully defined via software or firmware. In various forms, the SDR has permeated a wide range of user groups, from military and business to academia and the hobby radio community.

SDR technology has evolved steadily over the decades following its birth in the mid-1980s, with various surges of activity being generally aligned with new developments in related technologies (processor power, serial busses, signal processing techniques and SDR chipsets). At present, it appears that we are experiencing one such surge, and the GNSS SDR is expanding in many directions. The proliferation of collaboration and code-sharing sites such as GitHub has enabled communities to share and co-develop receiver technology; the rise in the maker-culture and crowdsourcing has led to the availability of high-performance radio-frequency (RF) front ends; and the adoption of SDRs by some major telecommunications companies has led to the availability of suitable integrated circuits.

These contributing factors have played a part in an increased uptake of GNSS SDRs in military, scientific and commercial applications. In this article, we explore the recent trends and the technology behind them.

SDR TOPOLOGIES

The software-defined radio for GNSS has evolved over the past decade, both in terms of the adoption of new frequencies, new signals and new systems, as they have become available; as well as the adoption of new processing platforms and their associated processing techniques. Shown in FIGURE 1 is a (simplified) depiction of how the topology of the software-defined GNSS receiver has evolved over the years (a–d) with a hint at where it might go next (e, f).

FIGURE 1. A simplified depiction of different SDR topologies (GPP = general-purpose processor, GPU = graphics processing unit, FPGA = field-programmable gate array, SoC = system on chip, RFSoM = radio-frequency system on module, RFSoC = radio-frequency system on chip).

In a traditional GNSS SDR, as depicted in Figure 1(a), the RF front end typically interfaces with the general-purpose processor (GPP) through a standard bus, and intermediate-frequency (IF) samples are streamed to a buffer. Once on the GPP, basic operations such as correlation, acquisition/tracking, measurement generation and positioning are performed.

Of all of the operations performed by a GNSS receiver, correlation is (by some orders of magnitude) the most computationally intensive. However, the correlation operations are relatively simple, often requiring only integer arithmetic, and can be easily parallelized. When running on modern processors, optimized software receivers can avail themselves of multi-threading (task parallelism) or the operations can be vectorized to exploit data parallelism (single-instruction, multiple data).
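The data parallelism described above is easy to see in code: correlation reduces to an integer multiply-accumulate of the incoming samples against a replica code, which NumPy (standing in for SIMD intrinsics) executes as a single vectorized dot product. This is an illustrative sketch, not any particular receiver's implementation; the code, noise model and function name are all assumptions.

```python
import numpy as np

# Sketch of why GNSS correlation vectorizes well: it is an integer
# multiply-accumulate over a replica code, executed here as one
# data-parallel dot product. Signal model and names are illustrative.

rng = np.random.default_rng(0)
code = rng.choice([-1, 1], size=1023).astype(np.int32)     # pseudo PRN chips
signal = np.tile(code, 4) + rng.integers(-2, 3, 4 * 1023)  # noisy repeats

def correlate(signal, replica, offset):
    """Integer correlation of the replica at a given code-phase offset."""
    segment = signal[offset : offset + replica.size]
    return int(np.dot(segment, replica))                   # vectorized MAC

# The correlator output peaks at the true code phase (offset 0 here)
# and is small at a wrong offset:
peak = correlate(signal, code, 0)
side = correlate(signal, code, 100)
print(peak, side)
```

Because each offset's dot product is independent, the search over code phases (and Doppler bins) also parallelizes trivially across threads, GPU cores or FPGA correlator banks, which is exactly the task-parallelism/data-parallelism split the text describes.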

Beyond a certain number of GNSS signals and a certain bandwidth, a GPP simply cannot cope, and many SDR receivers looked to hardware acceleration for the correlation process. This either took the form of a graphics processing unit (GPU) or a field-programmable gate array (FPGA), as depicted in Figure 1(b), both of which are well suited to highly parallel tasks. These processing platforms can be powerful and efficient, and so can alleviate almost all the challenges associated with correlation. This is not the only way to lighten the processing burden, as it is also possible to delegate the correlation task to a network of computers. This “cloud” receiver architecture, depicted in Figure 1(e), has received particular attention of late, showing promise for certain niche applications. This computation-in-the-cloud trend has partially reverted with the proliferation of many-core desktop and mobile processors, but at a certain level of signal or processing complexity, the extensions remain applicable.

Nowadays, data throughput has become an important consideration. When considering multi-constellation, multi-frequency receivers, the objective is often to preserve signal quality, which implies high bandwidth and high digitizer resolution. A triple-frequency front end might easily produce in excess of 100 or even 500 megabytes per second. When this data is delivered to the GPP or somewhere in the host computer, and then offloaded to the GPU (or any other hardware accelerator), it might be handled twice, exacerbating the bottleneck. To overcome this problem (and for other practical architectural reasons), it can be preferable to interface the front end directly with the accelerator, where correlation is performed, and leave the brains of the receiver (including loop closure; data processing; and position, velocity and time computation) on the GPP. This is a particularly convenient approach when using an FPGA accelerator, as shown in Figure 1(c).
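The data rates quoted above follow directly from sample rate, resolution and channel count. A back-of-the-envelope sketch (all parameters below are assumptions chosen for illustration, not specifications of any particular front end):

```python
# Back-of-the-envelope data-rate check for a multi-frequency front end.
# All parameter values are illustrative assumptions.

def throughput_mb_s(sample_rate_hz, bits_per_sample, channels=1, complex_iq=True):
    """Sustained data rate in megabytes per second (I/Q doubles the samples)."""
    samples_per_s = sample_rate_hz * channels * (2 if complex_iq else 1)
    return samples_per_s * bits_per_sample / 8 / 1e6

# Three bands at 25 MHz complex sampling, 4-bit I/Q samples:
print(throughput_mb_s(25e6, 4, channels=3))   # 75.0 MB/s

# Raising resolution to 16 bits pushes the same front end well past 100 MB/s:
print(throughput_mb_s(25e6, 16, channels=3))  # 300.0 MB/s
```

If every byte crosses the host bus twice (front end to GPP, then GPP to accelerator), these figures double, which is the bottleneck the direct front-end-to-accelerator topology avoids.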

A similar architecture can be achieved using modern system-on-chip (SoC) integrated circuits (ICs), which can offer a large FPGA and a powerful GPP on the same piece of silicon, as depicted in Figure 1(d). Indeed, a number of receivers using this architecture have seen commercial and scientific success, having many of the benefits of dedicated silicon while retaining the benefits of the software-defined radio (for example, the Swift Navigation Piksi Multi GNSS Module). Recent developments in the field have seen the world’s first RF system-on-module (RFSoM) and system-on-chip (RFSoC) devices, targeting 5G mobile communications applications. With an architecture similar to that of Figure 1(f), such an IC touts eight-input, eight-output (8×8) multiple-input, multiple-output (MIMO) operation with 12-bit analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) running at rates of 2 to 4 gigasamples per second. Depending on how this trend evolves (assuming lighter versions become available), this might offer an exciting new platform for GNSS SDRs, simultaneously capable of multi-frequency and multi-antenna operation.

RF HARDWARE: THE ENABLER

GNSS SDRs see the world through a hardware peripheral, and the capability of this hardware defines the boundary between what the receiver can and cannot do. In essence, the front-end peripheral converts one or more analog RF signals at the antenna into a stream or sequence of packets of digital baseband/IF data for the GPP.

A software-defined radio for GNSS benefits greatly from being flanked in the RF spectrum on both sides by signals that are of interest to the civilian population. Applications such as Digital Video Broadcasting — Terrestrial (DVB-T) and Digital Video Broadcasting — Satellite Second Generation (DVB-S2) receivers have resulted in the availability of a wide range of low-cost RF ICs that are tunable to GNSS frequencies (typically spanning from 900 MHz to 2.1 GHz), which, along with dedicated GPS ICs, were at the heart of early GNSS SDR front ends. Later developments in ICs designed around the 2G/3G/4G mobile communications standards brought another generation of ICs, offering higher instantaneous bandwidth, higher ADC resolution, MIMO and re-transmit capability. With the increase in popularity of the software-defined radio for cognitive radio, Wi-Fi, 3G and Long-Term Evolution (LTE), and with the benefits of a crowdfunding movement, a wide range of front-end peripherals quickly appeared. Many of these front ends are compatible with GNSS, offering significantly increased performance relative to their predecessors. A selection of some GNSS-compatible SDR peripherals (both new and old) is shown in TABLE 1.

TABLE 1. A selection of GNSS-compatible SDR front ends (Half duplex = transmit and receive but not simultaneously; Full duplex = transmit and receive simultaneously).

Reference Oscillators. Although many of the requirements of modern telecommunications ICs are beyond what is needed for GNSS (such as ADC resolution, frequency range, bandwidth and linearity), clock stability is often inadequate. Communications signals are generally received at high signal-to-noise ratio so the carrier can be easily recovered, even given very poor clock stability.

In contrast, clock stability can be critical for GNSS applications, due to the required comparatively long coherent integration period (greater than 1 millisecond) for a couple of reasons. Firstly, because the search-space granularity is related to the integration period and the size of the search space to the frequency uncertainty, clock accuracy is important, as an uncertainty of some tens of kHz might increase acquisition time. Secondly, the short-term stability is important as a large degree of phase wander can be challenging when attempting to track the carrier phase with a loop-update rate below 1 kHz. In fact, this issue was so pronounced on early RTL-SDR DVB-T front ends, that later revisions upgraded the quartz reference oscillator to a more respectable 0.5 parts per million temperature-compensated crystal oscillator (TCXO). Typically, a TCXO with an accuracy of better than 1 part per million is preferable, but this metric alone is far from sufficient.
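The link between clock accuracy and acquisition time can be made concrete with a small calculation. A common rule of thumb spaces Doppler search bins at 1/(2T) for a coherent integration time T, so the number of bins to search grows with both T and the frequency uncertainty. The numbers below (bin-spacing rule, 10 ms integration, 1 ppm vs. 20 ppm oscillators) are illustrative assumptions, and the function name is hypothetical.

```python
import math

# Sketch of why oscillator accuracy drives acquisition time. A common
# rule of thumb spaces Doppler bins at 1/(2T) for coherent time T; the
# search must cover +/- the frequency uncertainty. Numbers are assumed.

def doppler_bins(freq_uncertainty_hz, coherent_time_s):
    """Number of Doppler bins to search over +/- freq_uncertainty_hz."""
    bin_width_hz = 1.0 / (2.0 * coherent_time_s)
    return math.ceil(2.0 * freq_uncertainty_hz / bin_width_hz)

L1_HZ = 1575.42e6  # GPS L1 carrier
# A 1 ppm TCXO gives ~1.6 kHz of offset; a poor 20 ppm part ~31.5 kHz.
for ppm in (1, 20):
    uncertainty_hz = ppm * 1e-6 * L1_HZ
    print(ppm, doppler_bins(uncertainty_hz, 10e-3))  # 10 ms integration
```

With 10 ms integration, the 1 ppm oscillator needs a few dozen bins while the 20 ppm one needs over a thousand, so a cold-start search takes roughly twenty times longer with the poorer clock.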

Depending on the class of signals for which the SDR front end will be used, the characteristics of the oscillator, the configuration of its support electronics, and even whether the mixers and analog-to-digital conversion process use the same reference can vary. For example, not all TCXOs are suitable for GNSS applications due to the way in which they internally apply their temperature compensations. If a given TCXO uses a stepwise compensation configuration based on any form of digital feedback, the size of the resulting steps can severely impact the GNSS tracking loops. Even if a given TCXO has a suitable compensation curve and implementation, as well as low and acceptable intrinsic phase noise, every other link in the clock chain must preserve this performance. In some front-end implementations, swapping out a low-quality clock for a higher quality one is sufficient, but in others there can be design limitations in the oscillator power supply, the oscillator signal conditioning, subsequent clock generation steps, or distribution routing that can prevent the design from ever being suitable for GNSS use. This can be critical in cases where the carrier phase is of interest, for example, where phase coherence between channels is important for multi-frequency linear combinations, or for multi-antenna systems.
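The impact of a stepwise compensation jump on carrier tracking can also be quantified with a back-of-the-envelope calculation: a frequency step Δf left uncorrected for one loop-update interval T accumulates Δf·T cycles of carrier-phase error. The step size and loop rate below are illustrative assumptions.

```python
# Sketch (assumed numbers) of why stepwise TCXO compensation stresses
# carrier tracking: a frequency step left uncorrected for one loop
# update interval accumulates freq_step * interval cycles of phase error.

def phase_error_cycles(freq_step_hz, loop_interval_s):
    """Carrier-phase error in cycles (multiply by 2*pi for radians)."""
    return freq_step_hz * loop_interval_s

# A 5 Hz compensation step with a 20 ms loop update (50 Hz loop rate):
err = phase_error_cycles(5.0, 20e-3)
print(err)  # 0.1
```

A tenth of a cycle is 36 degrees of phase error arriving in a single update, which is more than enough to throw a narrow-band phase-locked loop out of lock, whereas a smoothly compensated oscillator spreads the same correction over many updates.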

Fortunately, many modern SDR front ends support the use of an external clock. This feature can also be important when attempting to combine two front-end peripherals to effect a dual-frequency or dual-antenna software receiver.

The Bus. An intrinsic bottleneck for any SDR system is the fact that some form of connection or bus is needed to carry data from the collection point to the processing element. In a fully integrated system, this connection still exists, but it is typically a trace on a circuit board or even a pathway within an integrated device. In contrast, in an SDR this often takes the form of a cable or connector between the physically discrete system modules. In cases where the devices are discrete, it is often necessary to implement some data buffering on both ends of the bus.

The suitability of a particular bus is often determined by the sustained data throughput rate required by the application and, in some cases, by the latency of the bus. A number of interfaces popular in modern SDR front ends are shown in FIGURE 2, illustrating the nominal throughput and the minimum latency of each. In the case of a GNSS SDR, the minimum conceivable throughput is modest, but a system could easily use in excess of 200 megabytes per second for multi-frequency, high-bit-depth data.

Of course, in post-processing applications, bus latency is not a factor. However, certain applications may require that this latency is small, or bounded, or somehow deterministic. Applications such as closed-loop vehicle control or certain safety systems might impose tight requirements on latency. High or unpredictable latency in GNSS measurements might lead to loop instability, in the case of a control system, or might erode safety margins. Although the trend in modern interfaces is for higher throughput, only certain interfaces offer low latency.

FIGURE 2. Bandwidth vs. latency scatter plot for popular buses.

The Silicon. In comparison with less-flexible, fixed-function GNSS receiver chips, GNSS SDR hardware platforms offer the opportunity to trade one to three orders of magnitude in power consumption and system size for substantial control over the characteristics of the design. Moreover, one of the other main differences between GNSS front ends and general-purpose SDR front ends is the number of bits of ADC resolution and the conversion linearity. Both contribute to power consumption. However, it may be worth considering that GNSS-specific front ends have not received as much attention as telecommunications front ends and, consequently, there is at least a generational gap in silicon mask technology (most GNSS products are at the 350-nanometer level).

In terms of GNSS-specific devices, products such as the SiGe SE4110L, the Maxim MAX2769 and Saphyrion’s SM1027U provide a solution for slightly flexible L1 GPS, Galileo or, in some chip revisions, GLONASS operation. These kinds of chips support a few sampling rates and filtering configurations.

In the middle ground are the much more flexible chips from Maxim including the MAX2120 and MAX2112, which provide total L-band coverage, a myriad of filtering options, and adjustable gain control, all within a 0.3-watt power budget per channel (RF portion only). These chips allow for single-band coverage of adjacent GNSS signals such as GPS and GLONASS L1 or L2 in a single non-aliased RF band.

In terms of multi-channel options, devices such as the Maxim MAX19994A or the NTLab NT1065 offer dual- or quad-channel functionality, respectively. Similar functionality can be achieved by pairing downconversion and IF receiver ICs, for example the Linear Technology LTC5569 dual active downconverting mixer and the Analog Devices AD6655 IF receiver, which might offer sufficient performance for high-accuracy dual-frequency positioning.

Higher up the cost, power and complexity structure are radios designed explicitly to support SDR applications that happen to cover GNSS bands such as the Lime LMS6002d/LMS7002M and the Analog Devices AD9364. Notably, these provide receive and transmit channels and frequency coverage up to 6 GHz.

Another interesting and relevant trend is in the use of direct RF sampling ICs, which offer the possibility of full L-band coverage and multi-antenna support. Examples include the Texas Instruments ADS54J40, which offers a dual-channel, 14-bit, 1.0-gigasamples-per-second ADC, or the LM97600 offering a 7.6 bit, quad-channel, 1.25-gigasamples-per-second ADC.

Future Trends, Limitations and Opportunities. Most of the innovation in SDR peripherals has taken place in the telecommunications domain. The GNSS SDR community, being comparatively small, has benefited from these innovations, insofar as they were applicable, but has had little influence over their design.

Looking at the bigger picture, it is clear that GNSS SDRs will simply have to follow the road paved by telecommunications SDRs. We will have to use what is made available, and so future trends in GNSS SDRs will likely be driven by the needs of the telecommunications SDR community.

So what are these trends and will they be aligned with GNSS trends? The answer seems to be yes and no. One of the bigger trends in modern GNSS receivers is the move to dual- or multi-frequency and a second trend is towards multi-antenna receivers for attitude determination or multi-element antennas for interference management. Meanwhile, telecommunications applications are almost universally using MIMO transceivers; however, they don’t seem to be using multiple (simultaneous) carriers.

What is particularly interesting is that the requirements for a MIMO transceiver are well aligned with those of a null-steering GNSS antenna: namely, high linearity, high ADC resolution and phase coherence between channels (provided by, for example, the Lime Microsystems LMS7002M or the Analog Devices AD9361). As a result, it is possible (or even likely) that in the near future we will see more innovation in GNSS SDRs in the area of multi-antenna processing than in multi-frequency processing.

Signal Processing Techniques for SDRs. As mentioned above, signal correlation for acquisition and tracking is the most computationally intensive operation conducted by a GNSS receiver. In software receivers, many signal acquisition strategies are built around the fast Fourier transform (FFT) algorithm with a signal tracking rake of three or more correlators per signal. When targeting real-time processing, these operations need to be applied to a stream of signal samples arriving at a rate of many megasamples per second. This is a challenge for GPPs when implementing a multi-constellation, multi-frequency GNSS receiver.
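
The FFT-based acquisition strategy mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration using a random placeholder code rather than a real GNSS PRN sequence, and it searches only code phase, not Doppler:

```python
import numpy as np

# Parallel code-phase search: one FFT/IFFT pair evaluates the circular
# correlation at every possible code phase simultaneously, instead of
# 1,023 separate time-domain correlations.

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)    # stand-in for a C/A code

true_delay = 417                             # samples of code delay
signal = np.roll(code, true_delay) + 0.5 * rng.standard_normal(1023)

corr = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(code)))
est_delay = int(np.argmax(np.abs(corr)))     # peak reveals the code phase
```

A full acquisition engine would repeat this search over a grid of Doppler bins, which is where the bulk of the computational load arises.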

The processing task can either be alleviated or accelerated. Assistance data can allow the receiver to reduce the size of the search acquisition space, thereby dramatically reducing the overall computational load. In many cases, the software receiver is running on a host computer with many connectivity options. Alternatively, a variety of options are available for accelerating the tasks.

Parallelization. The main approach for accelerating GNSS signal processing is parallelization. Shared-memory parallel computers can execute different instruction streams (or threads) on different processors, interleave multiple instruction streams on a single processor (simultaneous multithreading, or SMT), or both. This approach is referred to as task parallelism, and it is well supported by the main programming languages, compilers and operating systems. It fits naturally with the architecture of a GNSS receiver, which has many channels (one per satellite and frequency band) operating in parallel over the same input data. With an appropriate design, execution can be accelerated almost linearly with the number of processing cores. However, the distribution of processing tasks across different threads must be carefully designed in order to avoid bottlenecks (either in the processing or in memory access).
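
As a toy illustration of channel-level task parallelism, the sketch below maps each hypothetical tracking channel onto a worker thread operating over the same shared block of samples. The per-channel work here is a placeholder sum standing in for correlation:

```python
from concurrent.futures import ThreadPoolExecutor

samples = list(range(1000))                  # shared input data block

def process_channel(prn):
    # Placeholder for correlating `samples` against satellite `prn`'s
    # spreading code; every channel reads the same shared data.
    return prn, sum(samples)

# One worker per channel; results are gathered as {prn: accumulator}.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(process_channel, [1, 7, 13, 21]))
```

Note that in CPython the global interpreter lock limits the speedup of pure-Python threads; real-time receivers obtain near-linear scaling by running such channels as native threads or processes.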

In combination with task parallelization, software-defined receivers can still resort to another form of parallelization: instructions that can be applied to multiple data elements at the same time, thus exploiting data parallelism. This computer architecture is known as Single Instruction Multiple Data (SIMD), where a single operation is executed in one step on a vector of data, as illustrated in FIGURE 3.

FIGURE 3. Illustration of the operation of single-instruction multiple-data (SIMD) processors, which take a multiple-data input (arguments) and produce multiple results, given a single instruction operated in parallel in a set of processing units (PUs).

In GNSS receivers, this type of instruction can implement operations like multiply-and-accumulate across multiple (16, 32, 64 and so on) samples in a single clock cycle. Intel introduced the first instance of 64-bit SIMD extensions, called MMX, in 1997. Later SIMD extensions, SSE 1 to 4, added multiple 128-bit registers. AMD quickly followed and SIMD is now present in almost all modern processors.

Intel later introduced the Advanced Vector Extensions (AVX), featuring 256-bit registers, new instructions and a new coding scheme. In 2013, AVX2 expanded most integer instructions to 256 bits, and by 2016 the introduction of AVX-512 provided 512-bit extensions. SIMD technology is also present in embedded systems: NEON is a 128-bit SIMD architecture extension for the ARMv7 Cortex-A series of processors, providing 32 registers, 64 bits wide (with a dual view as 16 registers, 128 bits wide), while AArch64 NEON for ARMv8 processors provides 32 128-bit registers. In many cases, well-written code will be automatically compiled into some combination of these SIMD instructions; in other cases, they can be coded explicitly through intrinsics.
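
As a high-level analogue of these SIMD multiply-and-accumulate instructions, the NumPy sketch below compares a vectorized dot product, which the library dispatches to SIMD units where available, against the equivalent scalar loop. The data are illustrative placeholders:

```python
import numpy as np

# Multiply-and-accumulate is the core operation of GNSS correlation.
n = 4096
samples = np.ones(n, dtype=np.int16)         # toy digitized samples
code = -np.ones(n, dtype=np.int16)           # toy spreading-code replica

# Data-parallel path: one library call, vectorized across the array.
vectorized = int(np.dot(samples, code))

# Equivalent scalar path: one multiply and one add per element.
scalar = 0
for s, c in zip(samples, code):
    scalar += int(s) * int(c)
```

Both paths produce the same accumulator value; the difference lies entirely in how many samples are processed per clock cycle.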

Hardware Acceleration. Another possibility for accelerating signal processing is to offload computation-intensive portions of the workload to a device external to the main GPP executing the software. This is the case with graphics processing units (GPUs). This processor architecture follows another parallel programming model called Single Instruction, Multiple Threads (SIMT): while in SIMD the elements of short vectors are processed in parallel, and in SMT the instructions of several threads are run in parallel, SIMT is a hybrid between vector processing and hardware threading. Currently, the Open Computing Language (OpenCL) is the most popular open GPU computing language, supporting devices from several manufacturers, while CUDA (originally, Compute Unified Device Architecture) is the dominant proprietary framework specific to Nvidia GPUs. The key idea is to exploit the computational power of both GPP cores and GPU execution units in tandem for better utilization of the available computing resources. The main constraint in using GPUs is memory bandwidth: if not programmed carefully, most of the time will be spent transferring data back and forth between the GPP and the GPU instead of on the actual processing. A possible solution is an approach known as zero-copy operation, which consists of a unified address space for the GPP and the GPU that facilitates the passing of pointers between them, thus reducing the memory bandwidth requirements.

Similar benefits can be had by offloading correlation to reconfigurable hardware such as FPGAs. The correlation duties can be offloaded to an FPGA while the loop closure and navigation engine remain in the GPP. The FPGA is particularly well suited to GNSS correlation tasks and can implement dedicated low-resolution (such as 1- to 4-bit) multiply-and-accumulate blocks, where the equivalent 8-, 16- or 32-bit operations on a GPP would be excessive or inefficient. Early approaches involved an FPGA connected as a peripheral device via Ethernet, Peripheral Component Interconnect Express (PCIe) or a similar bus. However, as with the GPU, the data transfer quickly becomes a bottleneck. This challenge is addressed by integrating the GPP and FPGA in a single package. An early example of this approach was the Intel Atom E6x5C package hosting an Altera FPGA. More recent examples include Xilinx’s Zynq-7000 family, which integrates ARM processor cores and FPGA fabric in a single device. These SoCs allow the direct injection of signal samples from the RF front end into the FPGA, greatly reducing the amount of information to be exchanged with the GPP. This approach provides flexibility in how tracking and correlation resources are allocated, allowing configurable architectures according to the targeted signals of interest and the application at hand, and enabling the execution of full-featured software-defined receivers on small-form-factor devices.
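
The appeal of low-resolution correlation hardware can be illustrated numerically: quantizing the input to a 2-bit alphabet leaves the correlation peak easily detectable. The sketch below is a placeholder experiment with a random code, not a description of any particular FPGA design:

```python
import numpy as np

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=2046)            # toy spreading code
signal = code + 0.7 * rng.standard_normal(2046)      # noisy received samples

# 2-bit sign-magnitude quantizer with levels {-3, -1, +1, +3}: this is
# all the input resolution a narrow FPGA MAC block would need to handle.
quantized = np.sign(signal) * np.where(np.abs(signal) > 1.0, 3.0, 1.0)

full_peak = float(np.dot(signal, code))              # full-resolution MAC
quant_peak = float(np.dot(quantized, code))          # 2-bit MAC
off_peak = float(np.dot(quantized, np.roll(code, 100)))  # wrong code phase
```

The 2-bit correlation still towers over the misaligned case, which is why narrow multiply-and-accumulate blocks are sufficient for GNSS despite looking crude by DSP standards.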

THE CLOUD

The ability to manage resources as logical entities instead of as physical, hardwired units dedicated to a given application has materialized in business models such as Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). A network of software-defined GNSS receivers executed in the cloud appears to be the next natural step in this technology trend, in which the GNSS receiver is no longer a physical device but a virtualized function provided as a service (see FIGURE 4).


FIGURE 4. Illustration of the cloud-based GNSS signal-processing paradigm. (Courtesy of SPCOMNAV, Universitat Autònoma de Barcelona)

A virtualized software application is a program that can be executed regardless of the underlying computer platform. This can be achieved by packaging the application and all its software requirements (the operating system, supporting libraries and programs) in a single, self-contained software entity, which can then be run on any platform. An instance of a software-defined GNSS receiver executed in a virtual environment can then be called a virtualized GNSS receiver.

Early virtualization was in the form of full or machine virtualization (virtual machine or VM), which is a software application that emulates the hardware environment and functionality of a physical computer. With VMs, a software component called a hypervisor interfaces between the VM environment and the underlying hardware (CPU), providing the necessary layer of abstraction. A VM can run a full operating system, so conventional software applications (such as a software-defined GNSS receiver) can run within a VM without any required change.

Recently, the use of operating system virtualization or software containers has become more popular as they are often faster and more lightweight than VMs. Instead of a hypervisor, software containers use a daemon that supplements the host kernel, and can therefore be more efficient in making use of the underlying hardware. Examples of these software containers are Docker and Ubuntu Snaps. An example of an open-source software-defined GNSS receiver packaged as a Docker container is available.

Virtualized GNSS receivers bring important benefits in two areas: in business, as a technology enabler for new GNSS-based services; and in science, where GNSS SDRs serve as tools that ensure reproducibility.

As a service enabler, virtualized GNSS receivers allow for automatic and elastic creation, execution and destruction of application instances as required, and intelligent distribution of the running instances across computing resources, regardless of processor architecture, host operating system or physical location. Several solutions are reported in the technical literature, many based on the GNSS snapshot receiver, in which a short batch of data is sent to the software for position, velocity and time computation. Notable examples of this approach are Microsoft’s energy-efficient GPS sensing with cloud offloading and the system running on Amazon Web Services. These approaches allow extremely low power consumption in the user equipment, at the expense of limited accuracy (ranging from 10 to 100 meters of error) and high latency. Commercially, Trimble offers Catalyst, a subscription-based, cloud-based GNSS receiver service in which the user is charged according to the accuracy level provided, although the exact details are not yet public.

Virtualization technologies also offer a convenient solution for security-related applications (such as GPS M-code and Galileo PRS), since the encryption module remains on the service provider’s premises, and there is no need for a security module in the receiver equipment. This approach may enable the widespread use of restricted/authorized signals by the civilian population.

Finally, virtualization also offers important benefits for science. The flexibility of SDR receivers makes them an ideal tool for scientific experiments, since an implementation released under an open source license would allow a scientist to share a complete description of the processing from raw signal samples to the final research results.

STANDARDIZATION EFFORTS

GNSS signals are generally introduced to the front end through a standard interface, perhaps an SMA, MCX or U.FL RF connector, and the digitized signals depart through another standard interface, perhaps USB, PCIe or RJ45. However, for a GNSS SDR, this is where the standardization ends. As discussed above, there is a wide range of possibilities when capturing and digitizing a GNSS spectrum. Before processing this stream of digitized samples, details such as sample rate, center frequency, sample resolution, format/packing and a variety of other parameters must be established. This is particularly important in scenarios such as sharing or post-processing archived datasets in scientific applications, offloading computational burden to a cloud computer, or interfacing different data-capture devices with different receivers. Ad hoc digitized-data formats do not encourage interoperability and instead cultivate the potential for technology segmentation.
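
To make the problem concrete, the sketch below shows the kind of record such a metadata specification must capture. The field names are illustrative assumptions, not those of the ION standard:

```python
# Hypothetical metadata describing one digitized GNSS sample stream.
metadata = {
    "sample_rate_hz": 4_000_000,
    "center_frequency_hz": 1_575_420_000,    # GPS L1
    "sample_resolution_bits": 2,             # per I or Q component
    "sample_format": "sign-magnitude",
    "packing": "little-endian, I then Q",
}

def bytes_per_second(meta, complex_samples=True):
    """Storage rate implied by the metadata; I/Q doubles the bit count."""
    bits = meta["sample_resolution_bits"] * (2 if complex_samples else 1)
    return meta["sample_rate_hz"] * bits // 8
```

Without an agreed record of exactly these parameters, a dataset captured on one front end cannot be reliably replayed through another vendor's software receiver.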

To address this challenge, The Institute of Navigation has led an effort to develop a specification for standardized metadata that accurately and unambiguously describes the digitized data. Adoption of this metadata standard by both the data-collection hardware and the software-defined radio receiver can promote interoperability and reduce the potential for error. Similarly, an SDR processor’s utility is extended when it can seamlessly support many file formats from multiple sources. For more detail on the initiative, readers are encouraged to visit sdr.ion.org.

NEW FRONTIERS: GNSS SDRS IN SPACE

In space, GNSS receivers need to operate in scenarios that are quite different from those of ground-based receivers: higher (albeit predictable) dynamics, low signal-to-noise-density ratios and poor positioning geometry. It is therefore an excellent scenario for SDRs, since it requires non-standard features from the receiver.

However, space is a harsh environment for semiconductor devices. Charged particles and gamma rays create ionization, which can alter device parameters. In addition to permanently damaging complementary metal-oxide semiconductor (CMOS) ICs, radiation may cause single-event effects: ionizing-radiation strikes that corrupt the charge held in storage elements such as configuration memory cells, user memory and registers. When these effects occur, the system is usually recoverable with a power reset or a memory rewrite, but they may also destroy the device.

Until recently, radiation-hardened solutions were limited to application-specific integrated circuits (ASICs) and one-time-programmable devices. Recently, however, there has been an increase in the availability of space-grade FPGAs and memory devices. Examples include Xilinx’s Virtex-5QV, Microsemi’s RTG4 and Atmel’s ATF80 FPGAs, and commercial SDR platforms such as GOMspace’s GOMX-3. These devices allow the implementation of space-qualified GNSS receivers fully defined by software.

SDR receivers offer both reprogrammability (or upgradeability) and self-healing (or auto-remediation) capabilities. Examples include the possibility of uploading algorithms that had not yet been invented at the receiver’s launch, or the ability to recover from a single-event effect by remotely rewriting damaged functionality, reducing the need for onboard redundancy.

THE ECONOMICS OF SDRS

Flexibility has a cost, and more flexibility costs more. This is why an FPGA implementation of a complex system can never compete with the unit cost of a fixed-function ASIC. An example of a virtuous overlap might be seen in the Maxim MAX2120 and MAX2112 line of DVB-S2 TV receiver ICs, which have been successfully co-opted for GNSS SDR front ends because their features (configurable mixers, gains, filters, operating power range and so on) happen to be a good-enough match for the GNSS domain. On initial inspection, this allows for flexibility between the two application spaces and provides an ideal platform for SDRs supporting both TV decoding and GNSS on the same hardware radio module, but problems soon appear. The MAX21xx series is designed for TV applications, and TV applications tend to use 75-ohm input impedances while GNSS has standardized on 50 ohms. Certainly, one could add a software-defined impedance-selector block to the design, but we are now spending real hardware resources to accommodate SDR options. Adding an application that requires both reception and transmission, such as Wi-Fi, adds an entire signal chain to the design, as well as a large increase in the required dynamic range of the system. Adding an application that exploits MIMO multiplies the hardware resources needed.

The flexibility of SDR makes it an indispensable research, development, validation and hobbyist tool, but system design is about target selection and trade-offs. To quote one of the most successful engineers of the current era and Eckert-Mauchly Award winner Dr. Robert P. Colwell: “Pick your [technical] targets judiciously. … Pick your vision and then chase it. You can’t pick everything as your vision, that’s a recipe for mediocrity. If you can’t pick your target you’re not going to hit any of them.” For SDR-based systems, this would seem to mean that we should focus on applications where the flexibility afforded offsets the inevitable platform cost push, or where it allows targets of opportunity that require a subset of the capabilities of the platform already being used.

At the same time, our earlier definition of an SDR as “a reconfigurable radio system whose characteristics are partially or fully defined via software or firmware” means that SDRs are already everywhere around us on some level. Cellular phones provide an example of devices that connect a large number of hardware radios to a dizzying array of applications that process, consume, modify and sometimes retransmit the received data, while consumer devices such as wireless routers can often add support for protocol changes or tweaks via firmware. While the economics might prevent radio systems from being universal on all dimensions, there are very few radio devices now sold that don’t expose at least a few parameters via software.

CONCLUSION

It seems that we are at an interesting epoch in the evolution of the software-defined GNSS receiver. The GNSS community has begun to springboard off developments and advances in RF equipment and is enjoying both an increase in functionality and a reduction in cost.

Simultaneously, the software-defined GNSS receiver architecture has morphed in multiple directions, enjoying the virtually unlimited processing power of cloud computing, or availing itself of fully integrated RF and host-processor modules. As the use cases and host environments for GNSS receivers continue to diversify and the need for flexibility in the receiver continues to increase, it may be that the software-defined GNSS receiver emerges as a contender to the ASIC receiver for certain specialized use cases. Furthermore, as navigation is increasingly provided by an internet-connected device, the software-defined radio may even carve out its own niche and become the go-to solution.

ACKNOWLEDGMENTS

The authors thank Sanjeev Gunawardena at the Air Force Institute of Technology and José López-Salcedo of Universitat Autònoma de Barcelona for their discussions and correspondence and for providing valuable insight and suggestions.


JAMES T. CURRAN received a Ph.D. in electrical engineering in 2010 from the Department of Electrical Engineering, University College Cork, Ireland. He is a radio-navigation engineer at the European Space Agency in the Netherlands.

CARLES FERNÁNDEZ-PRADES received an M.Sc. and a Ph.D. in electrical engineering from the Universitat Politecnica de Catalunya, Barcelona, Spain, in 2001 and 2006, respectively. In 2006, he joined Centre Tecnològic Telecomunicacions Catalunya, Barcelona, where he holds a position as senior researcher and serves as head of the Communications Systems Division.

AIDEN MORRISON received his Ph.D. in 2010 from the University of Calgary, where he worked on ionospheric phase scintillation characterization using multi-frequency civil GNSS signals. He works as a research scientist at SINTEF Digital in Trondheim, Norway.

MICHELE BAVARO received his master’s degree in computer science from the University of Pisa, Italy, in 2003. After working for several organizations including his own consulting firm, he was appointed as a technical officer at the Joint Research Centre of the European Commission in Brussels. He now works at Swift Navigation in San Francisco, California.

FURTHER READING

