Research on Regional Revitalization through the Creation of Cultural Landscapes

Source: The 5th Oriental Design Forum and 2019 International Academic Symposium on Oriental Design | Citations: 0
Excerpt from the paper
Since the beginning of the new era, regional revitalization has remained a persistent challenge for China's traditionally de-industrialized cities, and the cultural landscape offers an important breakthrough for urban renewal. The regeneration of Liverpool in the United Kingdom traced a path from sharp decline to regional revitalization, a process in which the cultural landscape played a significant role. From the perspective of cultural-landscape creation, this article analyzes the development history, revitalization policies, and driving forces of the Liverpool case, examines the problems that regional revitalization currently faces in Chinese cities, and distills transferable lessons for moving from "cultural revitalization" to "regional revitalization" so as to promote the development of the city as a whole.