This is a collection I have compiled while reading papers; it will keep growing and gradually improve. It is aimed mainly at computer-science-related majors. If you find any errors, please leave a comment. I am a little bee spreading knowledge.
it lags behind its non-stochastic counterparts with respect to the convergence rate.
SVRG mitigates this shortcoming.
Several machine learning and optimization problems involve the minimization of a smooth, convex and separable cost function.
many more tasks in machine learning entail an optimization of similar form.
the volume of input data outgrows our computational capacity, posing major challenges.
it incurs a prohibitive cost for very large problems.
stochastic gradient descent overcomes this hurdle by computing only a surrogate of the full gradient.
the problems would be multiplied manyfold.
the approximate ‘gradient’ of stochastic methods introduces variance in the course of the optimization.
A full gradient computation is occasionally interleaved with the inexpensive steps of SGD.
an infrequent computation of the full gradient may severely impede the progress of these variance-reduction approaches.
a variant of the popular SVRG scheme
cheapSVRG can be seen as a family of stochastic optimization schemes encompassing SVRG and vanilla SGD.
we supplement our theoretical analysis with experiments on synthetic and real data.
empirical evaluation supports our claims for linear convergence.
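The SVRG idea sketched in the sentences above (interleave an occasional full-gradient "snapshot" with cheap variance-reduced SGD steps) can be illustrated with a minimal example. The least-squares objective, random data, and hyperparameters below are illustrative assumptions, not the setup of any particular paper:

```python
import numpy as np

# Minimal SVRG sketch on least squares f(w) = (1/n) * sum_i (x_i.w - y_i)^2 / 2.
# Data, step size, and epoch counts are illustrative assumptions.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

def grad_i(w, i):
    # Gradient of the i-th component function.
    return (X[i] @ w - y[i]) * X[i]

def full_grad(w):
    return X.T @ (X @ w - y) / n

w = np.zeros(d)
eta, epochs, m = 0.02, 50, n   # m inner SGD steps per full-gradient snapshot
for _ in range(epochs):
    w_snap = w.copy()           # snapshot point
    mu = full_grad(w_snap)      # the occasional, expensive full gradient
    for _ in range(m):
        i = rng.integers(n)
        # Variance-reduced stochastic gradient: unbiased, and its variance
        # vanishes as both w and w_snap approach the optimum.
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= eta * g

print(np.linalg.norm(w - w_true))
```

The correction term `- grad_i(w_snap, i) + mu` is what distinguishes this from vanilla SGD: it keeps each step unbiased while shrinking its variance, which is what enables linear convergence.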
Explosive growth in data and availability of cheap computing resources have sparked increasing interest in Big Learning.
substantial recent developments
Besides the explosive growth in volume, Big Data also has high velocity.
it would be painful and error prone to write the parallel code using the low-level system primitives.
the reasons for this are twofold.
if we knew that the event is certain to happen we would receive no information.
there is an intimate relationship between data compression and density estimation.
Data transmission can be reduced aggressively for energy saving without a large degradation of observation fidelity.
However, the optimal solution requires excessive computing and memory resources, which might be prohibitive for sensor nodes.
Therefore, we propose a lightweight greedy algorithm to solve it.
The sink node assembles all the received samples to reproduce the evolving process of certain physical phenomenon in the monitored area.
Several pioneering methods have been proposed to adjust the spatial sampling rate according to the statistical features of the monitored phenomenon.
Outlier detection should be an inseparable part of any data processing routine that takes place in WSNs.
We exemplify the benefits of our algorithm by implementing it using two different outlier detection heuristics.
Our algorithm is flexible in that it accommodates a whole class of unsupervised outlier detection techniques.
Data acquisition is an issue of ongoing attention for geographical information science.
Lifetime is defined as the number of rounds until the first sensor is drained of its energy.
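The "first node dies" lifetime definition in the sentence above can be computed by simulating rounds until some node can no longer afford the next one. The initial energies and per-round drain rates below are illustrative assumptions:

```python
# Minimal sketch of the "first node dies" lifetime definition.
# Initial energies and per-round drain rates are illustrative assumptions.
initial_energy = [100.0, 80.0, 120.0]   # per-node energy budget
drain_per_round = [2.0, 2.5, 1.5]       # per-node cost of one round

def lifetime(energy, drain):
    rounds = 0
    residual = list(energy)
    # Keep running rounds while every node can still pay for the next round.
    while all(e >= d for e, d in zip(residual, drain)):
        residual = [e - d for e, d in zip(residual, drain)]
        rounds += 1
    return rounds

print(lifetime(initial_energy, drain_per_round))  # node 1 dies first: 80/2.5 = 32 rounds
```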
The death of the sink node breaks down the functionality of the network.
Authors have proposed a hybrid scheme which combines the salient features of the in-network and grid-based aggregation schemes.
Even if one cluster head fails, the network may still be operational.
Through experiments on 13 different testbeds, encompassing 7 platforms, 6 link layers, and multiple densities and frequencies.
This has spurred a substantial amount of research on wireless sensor networks over the past few years.
The second approach preserves the original information.
In the sequel, we thoroughly review each of the aforementioned functionalities.
Its performance in terms of aggregation effectiveness is largely inferior to that of the MST.
We do not propose any new protocols in this paper, but rather attempt to address these issues at a higher level.
Sensor networks of the future are envisioned to revolutionize the paradigm of collecting and processing information in diverse environments.
Each sensor periodically produces information as it monitors its vicinity.
Replenishing energy via replacing batteries on hundreds of nodes is infeasible.
There is a vast body of extant work on in-network aggregation in the literature.
It can potentially lead to significant performance gains.
Many evolving low-power sensor networks are deployed at high spatial density to enable reliable operation in the face of component node failures as well as to facilitate high spatial localization of events of interest.
Despite the existence of potential applications, the conceptual importance of distributed source coding has not been mirrored in practical data compression.
We combine scheduling with transmission power control to mitigate the effects of interference.
TDMA is better suited to fast data collection.
Since the sink remains as the bottleneck, sending data over different paths does not reduce the schedule length.
The process of monitoring structures for the purpose of damage identification is known as structural health monitoring.
The adoption of WSNs in advanced structural health monitoring systems has proliferated in the last few decades because of their ability to operate reliably without human intervention in inaccessible areas.
The task scheduler must ensure that task allocation is matched to the available energy.
We demonstrate the superiority of our linear regression algorithm.
This platform comprises three boards.
As a burgeoning technique for signal processing, compressed sensing is being increasingly applied to wireless communications.
It has been tackled from various aspects since the outset of WSNs.
It complements other approaches and is deemed the most crucial mechanism for achieving energy-efficient data collection in WSNs.
Although data aggregation techniques have been heavily investigated, there is still room for improvement.
The complication involved in the interaction between data routing and CS-based aggregation has postponed the development on this front until very recently.
We prove the NP-completeness of MECDA in general through an intricate reduction.
We report a large set of numerical results, which validate the efficacy of our heuristic.
We first give a brief overview of applying compressed sensing in networking.
We coined the term A for this scheme in our previous work.
We denote by n and l the cardinalities of V and E.
This is because once CS aggregation is initiated, allowing additional raw data transmission would only hurt the energy efficiency.
As a byproduct, we also obtain the inapproximability of MECDA.
Other parameterized versions of CSATCP are intractable.
A randomized algorithm is borrowed from [18] to benchmark our heuristic in large scale WSNs.
We have demonstrated the marked improvement in energy efficiency achieved by our method through numerical simulations.
WSNs are a class of ad hoc networks with the capabilities of self-organization, in-network data processing, and unattended environment monitoring.
This feature scales down the cost of owning, programming and maintaining the WSNs.
We prioritized saving sensor energy over latency in receiving data.
The proposed scheme can cope with abnormal sensor readings gracefully.
It will bring a wealth of benefits similar to those of distributed source coding.
All these desired merits make compressive sampling a promising solution to the data gathering problem in large-scale wireless sensor networks.
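The compressive-sampling idea behind the sentences above can be sketched in a few lines: the sink receives far fewer random linear measurements than there are sensor readings, and recovers the readings by exploiting their sparsity. The problem sizes, the Gaussian measurement matrix, and the use of orthogonal matching pursuit for recovery are illustrative choices, not the method of any specific paper:

```python
import numpy as np

# Minimal sketch of compressive data gathering: n sensor readings that are
# k-sparse are recovered at the sink from m << n random linear measurements.
# Sizes, sparsity, and the OMP recovery routine are illustrative assumptions.
rng = np.random.default_rng(1)
n, m, k = 100, 50, 3
x = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x[idx] = rng.choice([-1.0, 1.0], k) * (1.0 + rng.random(k))  # sparse readings
Phi = rng.normal(size=(m, n)) / np.sqrt(m)                   # measurement matrix
y = Phi @ x                                                  # what the sink receives

def omp(Phi, y, k):
    # Orthogonal matching pursuit: greedily pick the column most correlated
    # with the residual, then re-fit the coefficients by least squares.
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print(np.linalg.norm(x_hat - x))
```

Each sensor only ever contributes to the m projections, so the per-node transmission load is balanced, which is one of the "desired merits" the sentence refers to.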
If some sensor nodes in a network lose a piece of crucial information, other sensor nodes may come to the rescue by providing the missing data.
Scheduling of loads has also been categorized as either job scheduling or task scheduling.
DLT provides a tractable and practical approach to load scheduling problems involved in sensing, communication and computation aspects.
Divisible load theory bears some resemblance to energy based task allocations.
A is used as a baseline.
Toward such a goal, we propose an energy-balanced task allocation such that the maximal energy dissipation among all sensor nodes during each period is minimized.
The widespread dissemination of small-scale sensor nodes has sparked interest in a powerful new database abstraction for sensor networks.
Query processing in large-scale sensor networks poses a number of challenges.
We refine our multi-query optimization algorithms to account for computational and memory limitations of sensor nodes.
The contribution of UMADE is complementary to this paper.
Running multiple queries in such an uncooperative manner will lead to bandwidth contention and even data loss as a result of transmission collisions.
Duplicate data requests from original queries can be eliminated as much as possible while guaranteeing the correctness of semantics of all queries.
harsh deployment environments
inject them into the sensor network
in a preconditioning step
intimately related
the forefront of current research
due to premature battery exhaustion
sensor malfunction
a faster depletion of its battery power
residual power
This paradigm shifts the focus from the traditional approaches to a more data-centric approach.